00:00:00.004 Started by upstream project "autotest-per-patch" build number 132341 00:00:00.004 originally caused by: 00:00:00.004 Started by user sys_sgci 00:00:00.102 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:01.466 The recommended git tool is: git 00:00:01.466 using credential 00000000-0000-0000-0000-000000000002 00:00:01.468 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:01.482 Fetching changes from the remote Git repository 00:00:01.484 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:01.498 Using shallow fetch with depth 1 00:00:01.498 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:01.498 > git --version # timeout=10 00:00:01.511 > git --version # 'git version 2.39.2' 00:00:01.511 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:01.525 Setting http proxy: proxy-dmz.intel.com:911 00:00:01.525 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.659 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.673 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.688 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.688 > git config core.sparsecheckout # timeout=10 00:00:09.702 > git read-tree -mu HEAD # timeout=10 00:00:09.719 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.746 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.746 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.866 [Pipeline] Start of Pipeline 00:00:09.880 [Pipeline] library 00:00:09.882 Loading library shm_lib@master 00:00:09.882 Library shm_lib@master is cached. Copying from home. 00:00:09.894 [Pipeline] node 00:00:09.902 Running on GP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.903 [Pipeline] { 00:00:09.910 [Pipeline] catchError 00:00:09.911 [Pipeline] { 00:00:09.923 [Pipeline] wrap 00:00:09.931 [Pipeline] { 00:00:09.936 [Pipeline] stage 00:00:09.937 [Pipeline] { (Prologue) 00:00:10.159 [Pipeline] sh 00:00:10.442 + logger -p user.info -t JENKINS-CI 00:00:10.461 [Pipeline] echo 00:00:10.462 Node: GP6 00:00:10.469 [Pipeline] sh 00:00:10.768 [Pipeline] setCustomBuildProperty 00:00:10.777 [Pipeline] echo 00:00:10.779 Cleanup processes 00:00:10.782 [Pipeline] sh 00:00:11.063 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.063 1891173 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.076 [Pipeline] sh 00:00:11.361 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.361 ++ grep -v 'sudo pgrep' 00:00:11.361 ++ awk '{print $1}' 00:00:11.361 + sudo kill -9 00:00:11.361 + true 00:00:11.377 [Pipeline] cleanWs 00:00:11.386 [WS-CLEANUP] Deleting project workspace... 00:00:11.386 [WS-CLEANUP] Deferred wipeout is used... 
00:00:11.392 [WS-CLEANUP] done 00:00:11.396 [Pipeline] setCustomBuildProperty 00:00:11.409 [Pipeline] sh 00:00:11.688 + sudo git config --global --replace-all safe.directory '*' 00:00:11.761 [Pipeline] httpRequest 00:00:12.144 [Pipeline] echo 00:00:12.145 Sorcerer 10.211.164.20 is alive 00:00:12.179 [Pipeline] retry 00:00:12.181 [Pipeline] { 00:00:12.189 [Pipeline] httpRequest 00:00:12.193 HttpMethod: GET 00:00:12.193 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.194 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:12.202 Response Code: HTTP/1.1 200 OK 00:00:12.202 Success: Status code 200 is in the accepted range: 200,404 00:00:12.202 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:42.209 [Pipeline] } 00:00:42.225 [Pipeline] // retry 00:00:42.232 [Pipeline] sh 00:00:42.521 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:42.538 [Pipeline] httpRequest 00:00:42.926 [Pipeline] echo 00:00:42.927 Sorcerer 10.211.164.20 is alive 00:00:42.937 [Pipeline] retry 00:00:42.939 [Pipeline] { 00:00:42.952 [Pipeline] httpRequest 00:00:42.957 HttpMethod: GET 00:00:42.958 URL: http://10.211.164.20/packages/spdk_ecdb65a23056454e365d492c34f3c84761b7d038.tar.gz 00:00:42.958 Sending request to url: http://10.211.164.20/packages/spdk_ecdb65a23056454e365d492c34f3c84761b7d038.tar.gz 00:00:42.964 Response Code: HTTP/1.1 200 OK 00:00:42.964 Success: Status code 200 is in the accepted range: 200,404 00:00:42.965 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_ecdb65a23056454e365d492c34f3c84761b7d038.tar.gz 00:05:02.437 [Pipeline] } 00:05:02.454 [Pipeline] // retry 00:05:02.462 [Pipeline] sh 00:05:02.756 + tar --no-same-owner -xf spdk_ecdb65a23056454e365d492c34f3c84761b7d038.tar.gz 00:05:06.058 [Pipeline] sh 00:05:06.347 + git -C spdk log --oneline -n5 00:05:06.347 ecdb65a23 bdev: Rename _bdev_memory_domain_io_get_buf() to bdev_io_get_bounce_buf() 00:05:06.347 6745f139b bdev: Relocate _bdev_memory_domain_io_get_buf_cb() close to _bdev_io_submit_ext() 00:05:06.347 866ba5ffe bdev: Factor out checking bounce buffer necessity into helper function 00:05:06.347 57b682926 bdev: Add spdk_dif_ctx and spdk_dif_error into spdk_bdev_io 00:05:06.347 3b58329b1 bdev: Use data_block_size for upper layer buffer if no_metadata is true 00:05:06.359 [Pipeline] } 00:05:06.374 [Pipeline] // stage 00:05:06.383 [Pipeline] stage 00:05:06.385 [Pipeline] { (Prepare) 00:05:06.403 [Pipeline] writeFile 00:05:06.418 [Pipeline] sh 00:05:06.706 + logger -p user.info -t JENKINS-CI 00:05:06.719 [Pipeline] sh 00:05:07.006 + logger -p user.info -t JENKINS-CI 00:05:07.020 [Pipeline] sh 00:05:07.308 + cat autorun-spdk.conf 00:05:07.308 SPDK_RUN_FUNCTIONAL_TEST=1 00:05:07.308 SPDK_TEST_NVMF=1 00:05:07.308 SPDK_TEST_NVME_CLI=1 00:05:07.308 SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:07.308 SPDK_TEST_NVMF_NICS=e810 00:05:07.308 SPDK_TEST_VFIOUSER=1 00:05:07.308 SPDK_RUN_UBSAN=1 00:05:07.308 NET_TYPE=phy 00:05:07.318 RUN_NIGHTLY=0 00:05:07.324 [Pipeline] readFile 00:05:07.356 [Pipeline] withEnv 00:05:07.359 [Pipeline] { 00:05:07.372 [Pipeline] sh 00:05:07.660 + set -ex 00:05:07.660 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:05:07.660 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:07.660 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:07.660 ++ SPDK_TEST_NVMF=1 
00:05:07.660 ++ SPDK_TEST_NVME_CLI=1 00:05:07.660 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:07.660 ++ SPDK_TEST_NVMF_NICS=e810 00:05:07.660 ++ SPDK_TEST_VFIOUSER=1 00:05:07.660 ++ SPDK_RUN_UBSAN=1 00:05:07.660 ++ NET_TYPE=phy 00:05:07.660 ++ RUN_NIGHTLY=0 00:05:07.660 + case $SPDK_TEST_NVMF_NICS in 00:05:07.660 + DRIVERS=ice 00:05:07.660 + [[ tcp == \r\d\m\a ]] 00:05:07.660 + [[ -n ice ]] 00:05:07.660 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:05:07.660 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:05:10.995 rmmod: ERROR: Module irdma is not currently loaded 00:05:10.995 rmmod: ERROR: Module i40iw is not currently loaded 00:05:10.995 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:05:10.995 + true 00:05:10.995 + for D in $DRIVERS 00:05:10.995 + sudo modprobe ice 00:05:10.995 + exit 0 00:05:11.007 [Pipeline] } 00:05:11.020 [Pipeline] // withEnv 00:05:11.025 [Pipeline] } 00:05:11.039 [Pipeline] // stage 00:05:11.048 [Pipeline] catchError 00:05:11.050 [Pipeline] { 00:05:11.062 [Pipeline] timeout 00:05:11.063 Timeout set to expire in 1 hr 0 min 00:05:11.064 [Pipeline] { 00:05:11.078 [Pipeline] stage 00:05:11.080 [Pipeline] { (Tests) 00:05:11.094 [Pipeline] sh 00:05:11.384 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:11.384 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:11.384 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:11.384 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:05:11.384 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:11.384 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:11.384 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:05:11.384 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:11.384 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:05:11.384 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:05:11.384 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:05:11.384 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:05:11.384 + source /etc/os-release 00:05:11.384 ++ NAME='Fedora Linux' 00:05:11.384 ++ VERSION='39 (Cloud Edition)' 00:05:11.384 ++ ID=fedora 00:05:11.384 ++ VERSION_ID=39 00:05:11.384 ++ VERSION_CODENAME= 00:05:11.384 ++ PLATFORM_ID=platform:f39 00:05:11.384 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:05:11.384 ++ ANSI_COLOR='0;38;2;60;110;180' 00:05:11.384 ++ LOGO=fedora-logo-icon 00:05:11.384 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:05:11.384 ++ HOME_URL=https://fedoraproject.org/ 00:05:11.384 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:05:11.384 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:05:11.384 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:05:11.384 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:05:11.384 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:05:11.384 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:05:11.384 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:05:11.384 ++ SUPPORT_END=2024-11-12 00:05:11.384 ++ VARIANT='Cloud Edition' 00:05:11.384 ++ VARIANT_ID=cloud 00:05:11.384 + uname -a 00:05:11.384 Linux spdk-gp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:05:11.384 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:12.324 Hugepages 00:05:12.325 node hugesize free / total 00:05:12.584 node0 1048576kB 0 / 0 00:05:12.584 node0 2048kB 0 / 0 00:05:12.584 node1 1048576kB 0 / 0 00:05:12.584 
node1 2048kB 0 / 0 00:05:12.584 00:05:12.584 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:12.584 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:12.584 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:12.584 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:12.584 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:12.584 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:12.584 + rm -f /tmp/spdk-ld-path 00:05:12.584 + source autorun-spdk.conf 00:05:12.584 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:12.584 ++ SPDK_TEST_NVMF=1 00:05:12.584 ++ SPDK_TEST_NVME_CLI=1 00:05:12.584 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:12.584 ++ SPDK_TEST_NVMF_NICS=e810 00:05:12.584 ++ SPDK_TEST_VFIOUSER=1 00:05:12.584 ++ SPDK_RUN_UBSAN=1 00:05:12.584 ++ NET_TYPE=phy 00:05:12.584 ++ RUN_NIGHTLY=0 00:05:12.584 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:05:12.584 + [[ -n '' ]] 00:05:12.584 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:12.584 + for M in /var/spdk/build-*-manifest.txt 00:05:12.584 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:05:12.584 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:12.584 + for M in /var/spdk/build-*-manifest.txt 00:05:12.584 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:05:12.584 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:12.584 + for M in /var/spdk/build-*-manifest.txt 00:05:12.584 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:05:12.584 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:05:12.584 ++ uname 00:05:12.584 + [[ Linux == \L\i\n\u\x ]] 00:05:12.584 + sudo dmesg -T 00:05:12.584 + sudo dmesg --clear 00:05:12.585 + dmesg_pid=1893136 00:05:12.585 + [[ Fedora Linux == FreeBSD ]] 00:05:12.585 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:12.585 + sudo dmesg -Tw 00:05:12.585 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:05:12.585 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:05:12.585 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:05:12.585 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:05:12.585 + [[ -x /usr/src/fio-static/fio ]] 00:05:12.585 + export FIO_BIN=/usr/src/fio-static/fio 00:05:12.585 + FIO_BIN=/usr/src/fio-static/fio 00:05:12.585 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:05:12.585 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:05:12.585 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:05:12.585 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:12.585 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:05:12.585 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:05:12.585 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:12.585 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:05:12.585 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:12.854 06:15:44 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:12.854 06:15:44 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:05:12.854 06:15:44 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:05:12.854 06:15:44 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:05:12.854 06:15:44 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:12.854 06:15:44 -- common/autotest_common.sh@1690 -- $ [[ n == y ]] 00:05:12.854 06:15:44 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.854 06:15:44 -- scripts/common.sh@15 -- $ shopt -s extglob 00:05:12.854 06:15:44 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:05:12.854 06:15:44 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.854 06:15:44 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.854 06:15:44 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.854 06:15:44 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.854 06:15:44 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.854 06:15:44 -- paths/export.sh@5 -- $ export PATH 00:05:12.854 06:15:44 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.854 06:15:44 -- common/autobuild_common.sh@485 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:12.854 06:15:44 -- common/autobuild_common.sh@486 -- $ date +%s 00:05:12.854 06:15:44 -- common/autobuild_common.sh@486 -- $ mktemp -dt spdk_1732079744.XXXXXX 00:05:12.854 06:15:44 -- common/autobuild_common.sh@486 -- $ SPDK_WORKSPACE=/tmp/spdk_1732079744.fjZMdQ 00:05:12.854 06:15:44 -- common/autobuild_common.sh@488 -- $ [[ -n '' ]] 00:05:12.854 06:15:44 -- common/autobuild_common.sh@492 -- $ '[' -n '' ']' 00:05:12.854 06:15:44 -- common/autobuild_common.sh@495 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:05:12.854 06:15:44 -- common/autobuild_common.sh@499 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:05:12.854 06:15:44 -- common/autobuild_common.sh@501 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:05:12.854 06:15:44 -- common/autobuild_common.sh@502 -- $ get_config_params 00:05:12.854 06:15:44 -- common/autotest_common.sh@407 -- $ xtrace_disable 00:05:12.854 06:15:44 -- common/autotest_common.sh@10 -- $ set +x 00:05:12.854 06:15:44 -- common/autobuild_common.sh@502 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:05:12.854 06:15:44 -- common/autobuild_common.sh@504 -- $ start_monitor_resources 00:05:12.854 06:15:44 -- pm/common@17 -- $ local monitor 00:05:12.854 06:15:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.854 06:15:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.854 06:15:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.854 06:15:44 -- pm/common@21 -- $ date +%s 00:05:12.854 06:15:44 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:12.854 06:15:44 -- pm/common@21 -- $ date +%s 00:05:12.854 06:15:44 -- pm/common@25 -- $ sleep 1 00:05:12.854 06:15:44 -- pm/common@21 -- $ date +%s 00:05:12.854 06:15:44 -- pm/common@21 -- $ date +%s 00:05:12.854 06:15:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079744 00:05:12.854 06:15:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079744 00:05:12.854 06:15:44 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079744 00:05:12.854 06:15:44 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732079744 00:05:12.855 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079744_collect-cpu-load.pm.log 00:05:12.855 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079744_collect-vmstat.pm.log 00:05:12.855 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079744_collect-cpu-temp.pm.log 00:05:12.855 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732079744_collect-bmc-pm.bmc.pm.log 00:05:13.795 06:15:45 -- common/autobuild_common.sh@505 -- $ trap stop_monitor_resources EXIT 00:05:13.795 06:15:45 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:05:13.795 06:15:45 -- spdk/autobuild.sh@12 -- $ umask 022 00:05:13.795 06:15:45 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:13.795 06:15:45 -- spdk/autobuild.sh@16 -- $ date -u 00:05:13.795 Wed Nov 20 05:15:45 AM UTC 2024 00:05:13.795 06:15:45 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:05:13.795 v25.01-pre-195-gecdb65a23 00:05:13.795 06:15:45 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:05:13.795 06:15:45 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:05:13.795 06:15:45 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:05:13.795 06:15:45 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:13.795 06:15:45 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:13.795 06:15:45 -- common/autotest_common.sh@10 -- $ set +x 00:05:13.795 ************************************ 00:05:13.795 START TEST ubsan 00:05:13.795 ************************************ 00:05:13.795 06:15:45 ubsan -- common/autotest_common.sh@1127 -- $ echo 'using ubsan' 00:05:13.795 using ubsan 00:05:13.795 00:05:13.795 real 0m0.000s 00:05:13.795 user 0m0.000s 00:05:13.795 sys 0m0.000s 00:05:13.795 06:15:45 ubsan -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:05:13.795 06:15:45 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:05:13.795 ************************************ 00:05:13.795 END TEST ubsan 00:05:13.795 ************************************ 00:05:13.795 06:15:45 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:05:13.795 06:15:45 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:05:13.795 06:15:45 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:05:13.795 06:15:45 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:05:13.795 06:15:45 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:05:13.795 06:15:45 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:05:13.795 06:15:45 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:05:13.795 06:15:45 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:05:13.795 
06:15:45 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:05:14.054 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:05:14.054 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:14.312 Using 'verbs' RDMA provider 00:05:24.869 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:05:34.869 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:05:35.128 Creating mk/config.mk...done. 00:05:35.128 Creating mk/cc.flags.mk...done. 00:05:35.128 Type 'make' to build. 00:05:35.128 06:16:06 -- spdk/autobuild.sh@70 -- $ run_test make make -j48 00:05:35.128 06:16:06 -- common/autotest_common.sh@1103 -- $ '[' 3 -le 1 ']' 00:05:35.128 06:16:06 -- common/autotest_common.sh@1109 -- $ xtrace_disable 00:05:35.128 06:16:06 -- common/autotest_common.sh@10 -- $ set +x 00:05:35.128 ************************************ 00:05:35.128 START TEST make 00:05:35.128 ************************************ 00:05:35.128 06:16:06 make -- common/autotest_common.sh@1127 -- $ make -j48 00:05:35.389 make[1]: Nothing to be done for 'all'. 00:05:37.324 The Meson build system 00:05:37.324 Version: 1.5.0 00:05:37.324 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:05:37.324 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:37.324 Build type: native build 00:05:37.324 Project name: libvfio-user 00:05:37.324 Project version: 0.0.1 00:05:37.324 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:37.324 C linker for the host machine: cc ld.bfd 2.40-14 00:05:37.324 Host machine cpu family: x86_64 00:05:37.324 Host machine cpu: x86_64 00:05:37.324 Run-time dependency threads found: YES 00:05:37.324 Library dl found: YES 00:05:37.324 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:37.324 Run-time dependency json-c found: YES 0.17 00:05:37.324 Run-time dependency cmocka found: YES 1.1.7 00:05:37.324 Program pytest-3 found: NO 00:05:37.324 Program flake8 found: NO 00:05:37.324 Program misspell-fixer found: NO 00:05:37.324 Program restructuredtext-lint found: NO 00:05:37.324 Program valgrind found: YES (/usr/bin/valgrind) 00:05:37.324 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:37.324 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:37.324 Compiler for C supports arguments -Wwrite-strings: YES 00:05:37.324 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:05:37.324 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:05:37.324 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:05:37.324 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:05:37.324 Build targets in project: 8 00:05:37.324 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:05:37.324 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:05:37.324 00:05:37.324 libvfio-user 0.0.1 00:05:37.324 00:05:37.324 User defined options 00:05:37.324 buildtype : debug 00:05:37.324 default_library: shared 00:05:37.324 libdir : /usr/local/lib 00:05:37.324 00:05:37.324 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:37.894 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:38.157 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:05:38.157 [2/37] Compiling C object samples/lspci.p/lspci.c.o 00:05:38.157 [3/37] Compiling C object samples/null.p/null.c.o 00:05:38.157 [4/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:05:38.157 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:05:38.157 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:05:38.157 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:05:38.157 [8/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:05:38.157 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:05:38.157 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:05:38.419 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:05:38.419 [12/37] Compiling C object test/unit_tests.p/mocks.c.o 00:05:38.419 [13/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:05:38.419 [14/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:05:38.419 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:05:38.419 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:05:38.419 [17/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:05:38.419 [18/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:05:38.419 [19/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:05:38.419 [20/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:05:38.419 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:05:38.419 [22/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:05:38.419 [23/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:05:38.419 [24/37] Compiling C object samples/server.p/server.c.o 00:05:38.419 [25/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:05:38.419 [26/37] Compiling C object samples/client.p/client.c.o 00:05:38.419 [27/37] Linking target samples/client 00:05:38.419 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:05:38.419 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:05:38.680 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:05:38.680 [31/37] Linking target test/unit_tests 00:05:38.680 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:05:38.680 [33/37] Linking target samples/null 00:05:38.680 [34/37] Linking target samples/server 00:05:38.680 [35/37] Linking target samples/lspci 00:05:38.680 [36/37] Linking target samples/gpio-pci-idio-16 00:05:38.680 [37/37] Linking target samples/shadow_ioeventfd_server 00:05:38.942 INFO: autodetecting backend as ninja 00:05:38.942 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:05:38.942 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:05:39.887 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:05:39.887 ninja: no work to do. 00:05:45.161 The Meson build system 00:05:45.161 Version: 1.5.0 00:05:45.161 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:05:45.161 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:05:45.161 Build type: native build 00:05:45.161 Program cat found: YES (/usr/bin/cat) 00:05:45.161 Project name: DPDK 00:05:45.161 Project version: 24.03.0 00:05:45.161 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:05:45.161 C linker for the host machine: cc ld.bfd 2.40-14 00:05:45.161 Host machine cpu family: x86_64 00:05:45.162 Host machine cpu: x86_64 00:05:45.162 Message: ## Building in Developer Mode ## 00:05:45.162 Program pkg-config found: YES (/usr/bin/pkg-config) 00:05:45.162 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:05:45.162 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:05:45.162 Program python3 found: YES (/usr/bin/python3) 00:05:45.162 Program cat found: YES (/usr/bin/cat) 00:05:45.162 Compiler for C supports arguments -march=native: YES 00:05:45.162 Checking for size of "void *" : 8 00:05:45.162 Checking for size of "void *" : 8 (cached) 00:05:45.162 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:05:45.162 Library m found: YES 00:05:45.162 Library numa found: YES 00:05:45.162 Has header "numaif.h" : YES 00:05:45.162 Library fdt found: NO 00:05:45.162 Library execinfo found: NO 00:05:45.162 Has header "execinfo.h" : YES 00:05:45.162 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:05:45.162 Run-time dependency libarchive found: NO (tried pkgconfig) 00:05:45.162 Run-time dependency libbsd found: NO (tried pkgconfig) 00:05:45.162 Run-time dependency jansson found: NO (tried pkgconfig) 00:05:45.162 Run-time dependency openssl found: YES 3.1.1 00:05:45.162 Run-time dependency libpcap found: YES 1.10.4 00:05:45.162 Has header "pcap.h" with dependency libpcap: YES 00:05:45.162 Compiler for C supports arguments -Wcast-qual: YES 00:05:45.162 Compiler for C supports arguments -Wdeprecated: YES 00:05:45.162 Compiler for C supports arguments -Wformat: YES 00:05:45.162 Compiler for C supports arguments -Wformat-nonliteral: NO 00:05:45.162 Compiler for C supports arguments -Wformat-security: NO 00:05:45.162 Compiler for C supports arguments -Wmissing-declarations: YES 00:05:45.162 Compiler for C supports arguments -Wmissing-prototypes: YES 00:05:45.162 Compiler for C supports arguments -Wnested-externs: YES 00:05:45.162 Compiler for C supports arguments -Wold-style-definition: YES 00:05:45.162 Compiler for C supports arguments -Wpointer-arith: YES 00:05:45.162 Compiler for C supports arguments -Wsign-compare: YES 00:05:45.162 Compiler for C supports arguments -Wstrict-prototypes: YES 00:05:45.162 Compiler for C supports arguments -Wundef: YES 00:05:45.162 Compiler for C supports arguments -Wwrite-strings: YES 00:05:45.162 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:05:45.162 Compiler for C supports arguments 
-Wno-packed-not-aligned: YES 00:05:45.162 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:05:45.162 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:05:45.162 Program objdump found: YES (/usr/bin/objdump) 00:05:45.162 Compiler for C supports arguments -mavx512f: YES 00:05:45.162 Checking if "AVX512 checking" compiles: YES 00:05:45.162 Fetching value of define "__SSE4_2__" : 1 00:05:45.162 Fetching value of define "__AES__" : 1 00:05:45.162 Fetching value of define "__AVX__" : 1 00:05:45.162 Fetching value of define "__AVX2__" : (undefined) 00:05:45.162 Fetching value of define "__AVX512BW__" : (undefined) 00:05:45.162 Fetching value of define "__AVX512CD__" : (undefined) 00:05:45.162 Fetching value of define "__AVX512DQ__" : (undefined) 00:05:45.162 Fetching value of define "__AVX512F__" : (undefined) 00:05:45.162 Fetching value of define "__AVX512VL__" : (undefined) 00:05:45.162 Fetching value of define "__PCLMUL__" : 1 00:05:45.162 Fetching value of define "__RDRND__" : 1 00:05:45.162 Fetching value of define "__RDSEED__" : (undefined) 00:05:45.162 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:05:45.162 Fetching value of define "__znver1__" : (undefined) 00:05:45.162 Fetching value of define "__znver2__" : (undefined) 00:05:45.162 Fetching value of define "__znver3__" : (undefined) 00:05:45.162 Fetching value of define "__znver4__" : (undefined) 00:05:45.162 Compiler for C supports arguments -Wno-format-truncation: YES 00:05:45.162 Message: lib/log: Defining dependency "log" 00:05:45.162 Message: lib/kvargs: Defining dependency "kvargs" 00:05:45.162 Message: lib/telemetry: Defining dependency "telemetry" 00:05:45.162 Checking for function "getentropy" : NO 00:05:45.162 Message: lib/eal: Defining dependency "eal" 00:05:45.162 Message: lib/ring: Defining dependency "ring" 00:05:45.162 Message: lib/rcu: Defining dependency "rcu" 00:05:45.162 Message: lib/mempool: Defining dependency "mempool" 00:05:45.162 Message: lib/mbuf: Defining dependency "mbuf" 00:05:45.162 Fetching value of define "__PCLMUL__" : 1 (cached) 00:05:45.162 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:05:45.162 Compiler for C supports arguments -mpclmul: YES 00:05:45.162 Compiler for C supports arguments -maes: YES 00:05:45.162 Compiler for C supports arguments -mavx512f: YES (cached) 00:05:45.162 Compiler for C supports arguments -mavx512bw: YES 00:05:45.162 Compiler for C supports arguments -mavx512dq: YES 00:05:45.162 Compiler for C supports arguments -mavx512vl: YES 00:05:45.162 Compiler for C supports arguments -mvpclmulqdq: YES 00:05:45.162 Compiler for C supports arguments -mavx2: YES 00:05:45.162 Compiler for C supports arguments -mavx: YES 00:05:45.162 Message: lib/net: Defining dependency "net" 00:05:45.162 Message: lib/meter: Defining dependency "meter" 00:05:45.162 Message: lib/ethdev: Defining dependency "ethdev" 00:05:45.162 Message: lib/pci: Defining dependency "pci" 00:05:45.162 Message: lib/cmdline: Defining dependency "cmdline" 00:05:45.162 Message: lib/hash: Defining dependency "hash" 00:05:45.162 Message: lib/timer: Defining dependency "timer" 00:05:45.162 Message: lib/compressdev: Defining dependency "compressdev" 00:05:45.162 Message: lib/cryptodev: Defining dependency "cryptodev" 00:05:45.162 Message: lib/dmadev: Defining dependency "dmadev" 00:05:45.162 Compiler for C supports arguments -Wno-cast-qual: YES 00:05:45.162 Message: lib/power: Defining dependency "power" 00:05:45.162 Message: lib/reorder: Defining dependency 
"reorder" 00:05:45.162 Message: lib/security: Defining dependency "security" 00:05:45.162 Has header "linux/userfaultfd.h" : YES 00:05:45.162 Has header "linux/vduse.h" : YES 00:05:45.162 Message: lib/vhost: Defining dependency "vhost" 00:05:45.162 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:05:45.162 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:05:45.162 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:05:45.162 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:05:45.162 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:05:45.162 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:05:45.162 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:05:45.162 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:05:45.162 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:05:45.162 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:05:45.162 Program doxygen found: YES (/usr/local/bin/doxygen) 00:05:45.162 Configuring doxy-api-html.conf using configuration 00:05:45.162 Configuring doxy-api-man.conf using configuration 00:05:45.162 Program mandb found: YES (/usr/bin/mandb) 00:05:45.162 Program sphinx-build found: NO 00:05:45.162 Configuring rte_build_config.h using configuration 00:05:45.162 Message: 00:05:45.162 ================= 00:05:45.162 Applications Enabled 00:05:45.162 ================= 00:05:45.162 00:05:45.162 apps: 00:05:45.162 00:05:45.162 00:05:45.162 Message: 00:05:45.162 ================= 00:05:45.162 Libraries Enabled 00:05:45.162 ================= 00:05:45.162 00:05:45.162 libs: 00:05:45.162 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:05:45.162 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:05:45.162 cryptodev, dmadev, power, reorder, security, vhost, 00:05:45.162 00:05:45.162 Message: 00:05:45.162 =============== 00:05:45.162 Drivers Enabled 00:05:45.162 =============== 00:05:45.162 00:05:45.162 common: 00:05:45.162 00:05:45.162 bus: 00:05:45.162 pci, vdev, 00:05:45.162 mempool: 00:05:45.162 ring, 00:05:45.162 dma: 00:05:45.162 00:05:45.162 net: 00:05:45.162 00:05:45.162 crypto: 00:05:45.162 00:05:45.162 compress: 00:05:45.162 00:05:45.162 vdpa: 00:05:45.162 00:05:45.162 00:05:45.162 Message: 00:05:45.162 ================= 00:05:45.162 Content Skipped 00:05:45.162 ================= 00:05:45.162 00:05:45.162 apps: 00:05:45.162 dumpcap: explicitly disabled via build config 00:05:45.162 graph: explicitly disabled via build config 00:05:45.162 pdump: explicitly disabled via build config 00:05:45.162 proc-info: explicitly disabled via build config 00:05:45.162 test-acl: explicitly disabled via build config 00:05:45.162 test-bbdev: explicitly disabled via build config 00:05:45.162 test-cmdline: explicitly disabled via build config 00:05:45.162 test-compress-perf: explicitly disabled via build config 00:05:45.162 test-crypto-perf: explicitly disabled via build config 00:05:45.162 test-dma-perf: explicitly disabled via build config 00:05:45.162 test-eventdev: explicitly disabled via build config 00:05:45.162 test-fib: explicitly disabled via build config 00:05:45.162 test-flow-perf: explicitly disabled via build config 00:05:45.162 test-gpudev: explicitly disabled via build config 00:05:45.162 test-mldev: explicitly disabled via build config 00:05:45.162 test-pipeline: explicitly disabled via build config 00:05:45.162 test-pmd: explicitly 
disabled via build config 00:05:45.162 test-regex: explicitly disabled via build config 00:05:45.162 test-sad: explicitly disabled via build config 00:05:45.162 test-security-perf: explicitly disabled via build config 00:05:45.162 00:05:45.162 libs: 00:05:45.162 argparse: explicitly disabled via build config 00:05:45.162 metrics: explicitly disabled via build config 00:05:45.162 acl: explicitly disabled via build config 00:05:45.162 bbdev: explicitly disabled via build config 00:05:45.162 bitratestats: explicitly disabled via build config 00:05:45.162 bpf: explicitly disabled via build config 00:05:45.162 cfgfile: explicitly disabled via build config 00:05:45.163 distributor: explicitly disabled via build config 00:05:45.163 efd: explicitly disabled via build config 00:05:45.163 eventdev: explicitly disabled via build config 00:05:45.163 dispatcher: explicitly disabled via build config 00:05:45.163 gpudev: explicitly disabled via build config 00:05:45.163 gro: explicitly disabled via build config 00:05:45.163 gso: explicitly disabled via build config 00:05:45.163 ip_frag: explicitly disabled via build config 00:05:45.163 jobstats: explicitly disabled via build config 00:05:45.163 latencystats: explicitly disabled via build config 00:05:45.163 lpm: explicitly disabled via build config 00:05:45.163 member: explicitly disabled via build config 00:05:45.163 pcapng: explicitly disabled via build config 00:05:45.163 rawdev: explicitly disabled via build config 00:05:45.163 regexdev: explicitly disabled via build config 00:05:45.163 mldev: explicitly disabled via build config 00:05:45.163 rib: explicitly disabled via build config 00:05:45.163 sched: explicitly disabled via build config 00:05:45.163 stack: explicitly disabled via build config 00:05:45.163 ipsec: explicitly disabled via build config 00:05:45.163 pdcp: explicitly disabled via build config 00:05:45.163 fib: explicitly disabled via build config 00:05:45.163 port: explicitly disabled via build config 00:05:45.163 pdump: explicitly disabled via build config 00:05:45.163 table: explicitly disabled via build config 00:05:45.163 pipeline: explicitly disabled via build config 00:05:45.163 graph: explicitly disabled via build config 00:05:45.163 node: explicitly disabled via build config 00:05:45.163 00:05:45.163 drivers: 00:05:45.163 common/cpt: not in enabled drivers build config 00:05:45.163 common/dpaax: not in enabled drivers build config 00:05:45.163 common/iavf: not in enabled drivers build config 00:05:45.163 common/idpf: not in enabled drivers build config 00:05:45.163 common/ionic: not in enabled drivers build config 00:05:45.163 common/mvep: not in enabled drivers build config 00:05:45.163 common/octeontx: not in enabled drivers build config 00:05:45.163 bus/auxiliary: not in enabled drivers build config 00:05:45.163 bus/cdx: not in enabled drivers build config 00:05:45.163 bus/dpaa: not in enabled drivers build config 00:05:45.163 bus/fslmc: not in enabled drivers build config 00:05:45.163 bus/ifpga: not in enabled drivers build config 00:05:45.163 bus/platform: not in enabled drivers build config 00:05:45.163 bus/uacce: not in enabled drivers build config 00:05:45.163 bus/vmbus: not in enabled drivers build config 00:05:45.163 common/cnxk: not in enabled drivers build config 00:05:45.163 common/mlx5: not in enabled drivers build config 00:05:45.163 common/nfp: not in enabled drivers build config 00:05:45.163 common/nitrox: not in enabled drivers build config 00:05:45.163 common/qat: not in enabled drivers build config 
00:05:45.163 common/sfc_efx: not in enabled drivers build config 00:05:45.163 mempool/bucket: not in enabled drivers build config 00:05:45.163 mempool/cnxk: not in enabled drivers build config 00:05:45.163 mempool/dpaa: not in enabled drivers build config 00:05:45.163 mempool/dpaa2: not in enabled drivers build config 00:05:45.163 mempool/octeontx: not in enabled drivers build config 00:05:45.163 mempool/stack: not in enabled drivers build config 00:05:45.163 dma/cnxk: not in enabled drivers build config 00:05:45.163 dma/dpaa: not in enabled drivers build config 00:05:45.163 dma/dpaa2: not in enabled drivers build config 00:05:45.163 dma/hisilicon: not in enabled drivers build config 00:05:45.163 dma/idxd: not in enabled drivers build config 00:05:45.163 dma/ioat: not in enabled drivers build config 00:05:45.163 dma/skeleton: not in enabled drivers build config 00:05:45.163 net/af_packet: not in enabled drivers build config 00:05:45.163 net/af_xdp: not in enabled drivers build config 00:05:45.163 net/ark: not in enabled drivers build config 00:05:45.163 net/atlantic: not in enabled drivers build config 00:05:45.163 net/avp: not in enabled drivers build config 00:05:45.163 net/axgbe: not in enabled drivers build config 00:05:45.163 net/bnx2x: not in enabled drivers build config 00:05:45.163 net/bnxt: not in enabled drivers build config 00:05:45.163 net/bonding: not in enabled drivers build config 00:05:45.163 net/cnxk: not in enabled drivers build config 00:05:45.163 net/cpfl: not in enabled drivers build config 00:05:45.163 net/cxgbe: not in enabled drivers build config 00:05:45.163 net/dpaa: not in enabled drivers build config 00:05:45.163 net/dpaa2: not in enabled drivers build config 00:05:45.163 net/e1000: not in enabled drivers build config 00:05:45.163 net/ena: not in enabled drivers build config 00:05:45.163 net/enetc: not in enabled drivers build config 00:05:45.163 net/enetfec: not in enabled drivers build config 00:05:45.163 net/enic: not in enabled drivers build config 00:05:45.163 net/failsafe: not in enabled drivers build config 00:05:45.163 net/fm10k: not in enabled drivers build config 00:05:45.163 net/gve: not in enabled drivers build config 00:05:45.163 net/hinic: not in enabled drivers build config 00:05:45.163 net/hns3: not in enabled drivers build config 00:05:45.163 net/i40e: not in enabled drivers build config 00:05:45.163 net/iavf: not in enabled drivers build config 00:05:45.163 net/ice: not in enabled drivers build config 00:05:45.163 net/idpf: not in enabled drivers build config 00:05:45.163 net/igc: not in enabled drivers build config 00:05:45.163 net/ionic: not in enabled drivers build config 00:05:45.163 net/ipn3ke: not in enabled drivers build config 00:05:45.163 net/ixgbe: not in enabled drivers build config 00:05:45.163 net/mana: not in enabled drivers build config 00:05:45.163 net/memif: not in enabled drivers build config 00:05:45.163 net/mlx4: not in enabled drivers build config 00:05:45.163 net/mlx5: not in enabled drivers build config 00:05:45.163 net/mvneta: not in enabled drivers build config 00:05:45.163 net/mvpp2: not in enabled drivers build config 00:05:45.163 net/netvsc: not in enabled drivers build config 00:05:45.163 net/nfb: not in enabled drivers build config 00:05:45.163 net/nfp: not in enabled drivers build config 00:05:45.163 net/ngbe: not in enabled drivers build config 00:05:45.163 net/null: not in enabled drivers build config 00:05:45.163 net/octeontx: not in enabled drivers build config 00:05:45.163 net/octeon_ep: not in enabled 
drivers build config 00:05:45.163 net/pcap: not in enabled drivers build config 00:05:45.163 net/pfe: not in enabled drivers build config 00:05:45.163 net/qede: not in enabled drivers build config 00:05:45.163 net/ring: not in enabled drivers build config 00:05:45.163 net/sfc: not in enabled drivers build config 00:05:45.163 net/softnic: not in enabled drivers build config 00:05:45.163 net/tap: not in enabled drivers build config 00:05:45.163 net/thunderx: not in enabled drivers build config 00:05:45.163 net/txgbe: not in enabled drivers build config 00:05:45.163 net/vdev_netvsc: not in enabled drivers build config 00:05:45.163 net/vhost: not in enabled drivers build config 00:05:45.163 net/virtio: not in enabled drivers build config 00:05:45.163 net/vmxnet3: not in enabled drivers build config 00:05:45.163 raw/*: missing internal dependency, "rawdev" 00:05:45.163 crypto/armv8: not in enabled drivers build config 00:05:45.163 crypto/bcmfs: not in enabled drivers build config 00:05:45.163 crypto/caam_jr: not in enabled drivers build config 00:05:45.163 crypto/ccp: not in enabled drivers build config 00:05:45.163 crypto/cnxk: not in enabled drivers build config 00:05:45.163 crypto/dpaa_sec: not in enabled drivers build config 00:05:45.163 crypto/dpaa2_sec: not in enabled drivers build config 00:05:45.163 crypto/ipsec_mb: not in enabled drivers build config 00:05:45.163 crypto/mlx5: not in enabled drivers build config 00:05:45.163 crypto/mvsam: not in enabled drivers build config 00:05:45.163 crypto/nitrox: not in enabled drivers build config 00:05:45.163 crypto/null: not in enabled drivers build config 00:05:45.163 crypto/octeontx: not in enabled drivers build config 00:05:45.163 crypto/openssl: not in enabled drivers build config 00:05:45.163 crypto/scheduler: not in enabled drivers build config 00:05:45.163 crypto/uadk: not in enabled drivers build config 00:05:45.163 crypto/virtio: not in enabled drivers build config 00:05:45.163 compress/isal: not in enabled drivers build config 00:05:45.163 compress/mlx5: not in enabled drivers build config 00:05:45.163 compress/nitrox: not in enabled drivers build config 00:05:45.163 compress/octeontx: not in enabled drivers build config 00:05:45.163 compress/zlib: not in enabled drivers build config 00:05:45.163 regex/*: missing internal dependency, "regexdev" 00:05:45.163 ml/*: missing internal dependency, "mldev" 00:05:45.163 vdpa/ifc: not in enabled drivers build config 00:05:45.163 vdpa/mlx5: not in enabled drivers build config 00:05:45.163 vdpa/nfp: not in enabled drivers build config 00:05:45.163 vdpa/sfc: not in enabled drivers build config 00:05:45.163 event/*: missing internal dependency, "eventdev" 00:05:45.163 baseband/*: missing internal dependency, "bbdev" 00:05:45.163 gpu/*: missing internal dependency, "gpudev" 00:05:45.163 00:05:45.163 00:05:45.163 Build targets in project: 85 00:05:45.163 00:05:45.163 DPDK 24.03.0 00:05:45.163 00:05:45.163 User defined options 00:05:45.163 buildtype : debug 00:05:45.163 default_library : shared 00:05:45.163 libdir : lib 00:05:45.163 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:05:45.163 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:05:45.163 c_link_args : 00:05:45.163 cpu_instruction_set: native 00:05:45.163 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:05:45.163 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:05:45.163 enable_docs : false 00:05:45.163 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:05:45.163 enable_kmods : false 00:05:45.163 max_lcores : 128 00:05:45.164 tests : false 00:05:45.164 00:05:45.164 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:05:45.164 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:05:45.164 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:05:45.164 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:05:45.164 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:05:45.164 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:05:45.164 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:05:45.164 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:05:45.164 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:05:45.164 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:05:45.164 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:05:45.164 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:05:45.164 [11/268] Linking static target lib/librte_kvargs.a 00:05:45.423 [12/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:05:45.423 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:05:45.423 [14/268] Linking static target lib/librte_log.a 00:05:45.423 [15/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:05:45.423 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:05:45.993 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:05:45.993 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:05:45.993 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:05:45.993 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:05:45.993 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:05:46.257 [22/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:05:46.257 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:05:46.257 [24/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:05:46.257 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:05:46.257 [26/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:05:46.257 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:05:46.257 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:05:46.257 [29/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:05:46.257 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 
00:05:46.257 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:05:46.257 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:05:46.257 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:05:46.257 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:05:46.257 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:05:46.257 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:05:46.257 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:05:46.257 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:05:46.257 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:05:46.257 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:05:46.257 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:05:46.257 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:05:46.257 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:05:46.257 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:05:46.257 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:05:46.257 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:05:46.257 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:05:46.257 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:05:46.257 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:05:46.257 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:05:46.257 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:05:46.257 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:05:46.257 [53/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:05:46.257 [54/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:05:46.257 [55/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:05:46.257 [56/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:05:46.257 [57/268] Linking static target lib/librte_telemetry.a 00:05:46.257 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:05:46.257 [59/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:05:46.516 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:05:46.516 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:05:46.516 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:05:46.516 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:05:46.516 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:05:46.516 [65/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:05:46.516 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:05:46.516 [67/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:05:46.781 [68/268] Linking target lib/librte_log.so.24.1 00:05:46.781 [69/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:05:46.781 [70/268] Linking static target lib/librte_pci.a 00:05:46.781 [71/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:05:47.043 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:05:47.043 [73/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:05:47.043 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:05:47.043 [75/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:05:47.043 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:05:47.043 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:05:47.043 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:05:47.043 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:05:47.043 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:05:47.043 [81/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:05:47.043 [82/268] Linking target lib/librte_kvargs.so.24.1 00:05:47.043 [83/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:05:47.043 [84/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:05:47.043 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:05:47.043 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:05:47.304 [87/268] Linking static target lib/librte_ring.a 00:05:47.304 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:05:47.304 [89/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:05:47.304 [90/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:05:47.304 [91/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:05:47.304 [92/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:05:47.304 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:05:47.304 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:05:47.304 [95/268] Linking static target lib/librte_meter.a 00:05:47.304 [96/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:05:47.304 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:05:47.304 [98/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:05:47.304 [99/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:05:47.304 [100/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.304 [101/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:05:47.304 [102/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:05:47.304 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:05:47.304 [104/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:05:47.304 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:05:47.304 [106/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:05:47.304 [107/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:05:47.304 [108/268] Linking static target lib/librte_eal.a 00:05:47.304 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:05:47.304 [110/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:05:47.304 [111/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:05:47.304 [112/268] Generating symbol file 
lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:05:47.304 [113/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:05:47.304 [114/268] Linking static target lib/librte_mempool.a 00:05:47.566 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:05:47.566 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:05:47.566 [117/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.566 [118/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:05:47.566 [119/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:05:47.566 [120/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:05:47.566 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:05:47.566 [122/268] Linking target lib/librte_telemetry.so.24.1 00:05:47.566 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:05:47.566 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:05:47.566 [125/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:05:47.566 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:05:47.566 [127/268] Linking static target lib/librte_rcu.a 00:05:47.566 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:05:47.566 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:05:47.566 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:05:47.566 [131/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:05:47.566 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:05:47.566 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:05:47.825 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:05:47.825 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:05:47.825 [136/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:05:47.825 [137/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.825 [138/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:05:47.825 [139/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:05:47.825 [140/268] Linking static target lib/librte_net.a 00:05:48.087 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:05:48.087 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:05:48.087 [143/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:05:48.087 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:05:48.087 [145/268] Linking static target lib/librte_cmdline.a 00:05:48.087 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:05:48.087 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:05:48.087 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:05:48.349 [149/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:05:48.349 [150/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:05:48.349 [151/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:05:48.349 [152/268] Linking static target lib/librte_timer.a 00:05:48.349 
[153/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:05:48.349 [154/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.349 [155/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:05:48.349 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:05:48.349 [157/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:05:48.349 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:05:48.349 [159/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:05:48.349 [160/268] Linking static target lib/librte_dmadev.a 00:05:48.349 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:05:48.349 [162/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.608 [163/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:05:48.608 [164/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:05:48.608 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:05:48.608 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:05:48.608 [167/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.608 [168/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:05:48.608 [169/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:05:48.608 [170/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:05:48.608 [171/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:05:48.608 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.866 [173/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:05:48.866 [174/268] Linking static target lib/librte_compressdev.a 00:05:48.866 [175/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:05:48.866 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:05:48.866 [177/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:05:48.866 [178/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:05:48.866 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:05:48.866 [180/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:05:48.866 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:05:48.866 [182/268] Linking static target lib/librte_hash.a 00:05:48.866 [183/268] Linking static target lib/librte_power.a 00:05:48.866 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:05:48.866 [185/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:05:48.866 [186/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:05:48.866 [187/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:05:48.866 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:48.866 [189/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:05:48.866 [190/268] Linking static target lib/librte_reorder.a 00:05:49.125 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:05:49.125 [192/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:05:49.125 [193/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:05:49.125 [194/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.125 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:05:49.125 [196/268] Linking static target lib/librte_mbuf.a 00:05:49.125 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:05:49.125 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:05:49.125 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:05:49.125 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:49.125 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:05:49.125 [202/268] Linking static target drivers/librte_bus_vdev.a 00:05:49.125 [203/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.125 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:05:49.382 [205/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:05:49.382 [206/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:49.382 [207/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:05:49.382 [208/268] Linking static target drivers/librte_bus_pci.a 00:05:49.382 [209/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.382 [210/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:05:49.382 [211/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:05:49.382 [212/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.382 [213/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:49.382 [214/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:05:49.382 [215/268] Linking static target drivers/librte_mempool_ring.a 00:05:49.382 [216/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:05:49.382 [217/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.382 [218/268] Linking static target lib/librte_security.a 00:05:49.382 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.382 [220/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:05:49.382 [221/268] Linking static target lib/librte_ethdev.a 00:05:49.639 [222/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:05:49.639 [223/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.639 [224/268] Linking static target lib/librte_cryptodev.a 00:05:49.639 [225/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:05:49.639 [226/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:05:50.569 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:51.943 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:05:53.842 [229/268] Generating lib/eal.sym_chk with a 
custom command (wrapped by meson to capture output) 00:05:53.842 [230/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:05:53.842 [231/268] Linking target lib/librte_eal.so.24.1 00:05:53.842 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:05:54.100 [233/268] Linking target lib/librte_ring.so.24.1 00:05:54.100 [234/268] Linking target lib/librte_timer.so.24.1 00:05:54.100 [235/268] Linking target lib/librte_meter.so.24.1 00:05:54.100 [236/268] Linking target lib/librte_pci.so.24.1 00:05:54.100 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:05:54.100 [238/268] Linking target lib/librte_dmadev.so.24.1 00:05:54.100 [239/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:05:54.100 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:05:54.100 [241/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:05:54.100 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:05:54.100 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:05:54.100 [244/268] Linking target lib/librte_rcu.so.24.1 00:05:54.100 [245/268] Linking target lib/librte_mempool.so.24.1 00:05:54.100 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:05:54.357 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:05:54.357 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:05:54.357 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:05:54.357 [250/268] Linking target lib/librte_mbuf.so.24.1 00:05:54.357 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:05:54.615 [252/268] Linking target lib/librte_compressdev.so.24.1 00:05:54.615 [253/268] Linking target lib/librte_reorder.so.24.1 00:05:54.615 [254/268] Linking target lib/librte_net.so.24.1 00:05:54.615 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:05:54.615 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:05:54.615 [257/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:05:54.615 [258/268] Linking target lib/librte_security.so.24.1 00:05:54.615 [259/268] Linking target lib/librte_hash.so.24.1 00:05:54.615 [260/268] Linking target lib/librte_cmdline.so.24.1 00:05:54.615 [261/268] Linking target lib/librte_ethdev.so.24.1 00:05:54.872 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:05:54.872 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:05:54.872 [264/268] Linking target lib/librte_power.so.24.1 00:05:58.153 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:05:58.153 [266/268] Linking static target lib/librte_vhost.a 00:05:58.719 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:05:58.719 [268/268] Linking target lib/librte_vhost.so.24.1 00:05:58.719 INFO: autodetecting backend as ninja 00:05:58.719 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:06:20.715 CC lib/log/log.o 00:06:20.715 CC lib/log/log_flags.o 00:06:20.715 CC lib/log/log_deprecated.o 00:06:20.715 CC lib/ut/ut.o 00:06:20.715 CC lib/ut_mock/mock.o 00:06:20.715 LIB 
libspdk_ut_mock.a 00:06:20.715 LIB libspdk_ut.a 00:06:20.715 LIB libspdk_log.a 00:06:20.715 SO libspdk_ut_mock.so.6.0 00:06:20.715 SO libspdk_ut.so.2.0 00:06:20.715 SO libspdk_log.so.7.1 00:06:20.715 SYMLINK libspdk_ut.so 00:06:20.715 SYMLINK libspdk_ut_mock.so 00:06:20.715 SYMLINK libspdk_log.so 00:06:20.715 CC lib/dma/dma.o 00:06:20.715 CC lib/ioat/ioat.o 00:06:20.715 CXX lib/trace_parser/trace.o 00:06:20.715 CC lib/util/base64.o 00:06:20.715 CC lib/util/bit_array.o 00:06:20.715 CC lib/util/cpuset.o 00:06:20.715 CC lib/util/crc16.o 00:06:20.715 CC lib/util/crc32.o 00:06:20.715 CC lib/util/crc32c.o 00:06:20.715 CC lib/util/crc32_ieee.o 00:06:20.715 CC lib/util/crc64.o 00:06:20.715 CC lib/util/dif.o 00:06:20.715 CC lib/util/fd.o 00:06:20.715 CC lib/util/fd_group.o 00:06:20.715 CC lib/util/file.o 00:06:20.715 CC lib/util/hexlify.o 00:06:20.715 CC lib/util/iov.o 00:06:20.715 CC lib/util/math.o 00:06:20.715 CC lib/util/net.o 00:06:20.715 CC lib/util/pipe.o 00:06:20.715 CC lib/util/strerror_tls.o 00:06:20.715 CC lib/util/string.o 00:06:20.715 CC lib/util/uuid.o 00:06:20.715 CC lib/util/xor.o 00:06:20.715 CC lib/util/md5.o 00:06:20.715 CC lib/util/zipf.o 00:06:20.715 CC lib/vfio_user/host/vfio_user_pci.o 00:06:20.715 CC lib/vfio_user/host/vfio_user.o 00:06:20.715 LIB libspdk_dma.a 00:06:20.715 SO libspdk_dma.so.5.0 00:06:20.715 SYMLINK libspdk_dma.so 00:06:20.715 LIB libspdk_ioat.a 00:06:20.715 SO libspdk_ioat.so.7.0 00:06:20.715 SYMLINK libspdk_ioat.so 00:06:20.715 LIB libspdk_vfio_user.a 00:06:20.715 SO libspdk_vfio_user.so.5.0 00:06:20.715 SYMLINK libspdk_vfio_user.so 00:06:20.715 LIB libspdk_util.a 00:06:20.715 SO libspdk_util.so.10.1 00:06:20.715 SYMLINK libspdk_util.so 00:06:20.715 CC lib/rdma_utils/rdma_utils.o 00:06:20.715 CC lib/idxd/idxd.o 00:06:20.715 CC lib/idxd/idxd_user.o 00:06:20.715 CC lib/vmd/vmd.o 00:06:20.715 CC lib/idxd/idxd_kernel.o 00:06:20.715 CC lib/conf/conf.o 00:06:20.715 CC lib/json/json_parse.o 00:06:20.715 CC lib/env_dpdk/env.o 00:06:20.715 CC lib/vmd/led.o 00:06:20.715 CC lib/env_dpdk/memory.o 00:06:20.715 CC lib/json/json_util.o 00:06:20.715 CC lib/json/json_write.o 00:06:20.715 CC lib/env_dpdk/pci.o 00:06:20.715 CC lib/env_dpdk/init.o 00:06:20.715 CC lib/env_dpdk/threads.o 00:06:20.715 CC lib/env_dpdk/pci_ioat.o 00:06:20.715 CC lib/env_dpdk/pci_virtio.o 00:06:20.715 CC lib/env_dpdk/pci_vmd.o 00:06:20.715 CC lib/env_dpdk/pci_idxd.o 00:06:20.715 CC lib/env_dpdk/pci_event.o 00:06:20.715 CC lib/env_dpdk/sigbus_handler.o 00:06:20.715 CC lib/env_dpdk/pci_dpdk.o 00:06:20.715 CC lib/env_dpdk/pci_dpdk_2207.o 00:06:20.715 CC lib/env_dpdk/pci_dpdk_2211.o 00:06:20.715 LIB libspdk_trace_parser.a 00:06:20.715 SO libspdk_trace_parser.so.6.0 00:06:20.715 SYMLINK libspdk_trace_parser.so 00:06:20.715 LIB libspdk_conf.a 00:06:20.715 SO libspdk_conf.so.6.0 00:06:20.715 LIB libspdk_rdma_utils.a 00:06:20.715 LIB libspdk_json.a 00:06:20.715 SO libspdk_rdma_utils.so.1.0 00:06:20.715 SO libspdk_json.so.6.0 00:06:20.715 SYMLINK libspdk_conf.so 00:06:20.715 SYMLINK libspdk_rdma_utils.so 00:06:20.715 SYMLINK libspdk_json.so 00:06:20.715 CC lib/rdma_provider/common.o 00:06:20.715 CC lib/rdma_provider/rdma_provider_verbs.o 00:06:20.715 CC lib/jsonrpc/jsonrpc_server.o 00:06:20.715 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:06:20.715 CC lib/jsonrpc/jsonrpc_client.o 00:06:20.715 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:06:20.715 LIB libspdk_idxd.a 00:06:20.715 LIB libspdk_vmd.a 00:06:20.715 SO libspdk_idxd.so.12.1 00:06:20.715 SO libspdk_vmd.so.6.0 00:06:20.715 SYMLINK libspdk_idxd.so 00:06:20.715 
SYMLINK libspdk_vmd.so 00:06:20.715 LIB libspdk_rdma_provider.a 00:06:20.715 SO libspdk_rdma_provider.so.7.0 00:06:20.715 LIB libspdk_jsonrpc.a 00:06:20.715 SO libspdk_jsonrpc.so.6.0 00:06:20.715 SYMLINK libspdk_rdma_provider.so 00:06:20.715 SYMLINK libspdk_jsonrpc.so 00:06:20.715 CC lib/rpc/rpc.o 00:06:20.715 LIB libspdk_rpc.a 00:06:20.715 SO libspdk_rpc.so.6.0 00:06:20.715 SYMLINK libspdk_rpc.so 00:06:20.974 CC lib/trace/trace.o 00:06:20.974 CC lib/trace/trace_flags.o 00:06:20.974 CC lib/trace/trace_rpc.o 00:06:20.974 CC lib/notify/notify.o 00:06:20.974 CC lib/keyring/keyring.o 00:06:20.974 CC lib/notify/notify_rpc.o 00:06:20.974 CC lib/keyring/keyring_rpc.o 00:06:20.974 LIB libspdk_notify.a 00:06:20.974 SO libspdk_notify.so.6.0 00:06:21.232 LIB libspdk_keyring.a 00:06:21.232 SYMLINK libspdk_notify.so 00:06:21.232 LIB libspdk_trace.a 00:06:21.232 SO libspdk_keyring.so.2.0 00:06:21.232 SO libspdk_trace.so.11.0 00:06:21.232 SYMLINK libspdk_keyring.so 00:06:21.232 SYMLINK libspdk_trace.so 00:06:21.490 CC lib/thread/thread.o 00:06:21.490 CC lib/thread/iobuf.o 00:06:21.490 CC lib/sock/sock.o 00:06:21.490 CC lib/sock/sock_rpc.o 00:06:21.490 LIB libspdk_env_dpdk.a 00:06:21.490 SO libspdk_env_dpdk.so.15.1 00:06:21.490 SYMLINK libspdk_env_dpdk.so 00:06:21.748 LIB libspdk_sock.a 00:06:21.748 SO libspdk_sock.so.10.0 00:06:21.748 SYMLINK libspdk_sock.so 00:06:22.006 CC lib/nvme/nvme_ctrlr_cmd.o 00:06:22.006 CC lib/nvme/nvme_ctrlr.o 00:06:22.006 CC lib/nvme/nvme_fabric.o 00:06:22.006 CC lib/nvme/nvme_ns_cmd.o 00:06:22.006 CC lib/nvme/nvme_ns.o 00:06:22.006 CC lib/nvme/nvme_pcie_common.o 00:06:22.007 CC lib/nvme/nvme_pcie.o 00:06:22.007 CC lib/nvme/nvme_qpair.o 00:06:22.007 CC lib/nvme/nvme.o 00:06:22.007 CC lib/nvme/nvme_quirks.o 00:06:22.007 CC lib/nvme/nvme_transport.o 00:06:22.007 CC lib/nvme/nvme_discovery.o 00:06:22.007 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:06:22.007 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:06:22.007 CC lib/nvme/nvme_tcp.o 00:06:22.007 CC lib/nvme/nvme_opal.o 00:06:22.007 CC lib/nvme/nvme_io_msg.o 00:06:22.007 CC lib/nvme/nvme_poll_group.o 00:06:22.007 CC lib/nvme/nvme_zns.o 00:06:22.007 CC lib/nvme/nvme_stubs.o 00:06:22.007 CC lib/nvme/nvme_auth.o 00:06:22.007 CC lib/nvme/nvme_cuse.o 00:06:22.007 CC lib/nvme/nvme_vfio_user.o 00:06:22.007 CC lib/nvme/nvme_rdma.o 00:06:22.942 LIB libspdk_thread.a 00:06:22.942 SO libspdk_thread.so.11.0 00:06:22.942 SYMLINK libspdk_thread.so 00:06:23.201 CC lib/accel/accel.o 00:06:23.201 CC lib/fsdev/fsdev.o 00:06:23.201 CC lib/blob/blobstore.o 00:06:23.201 CC lib/vfu_tgt/tgt_endpoint.o 00:06:23.201 CC lib/init/json_config.o 00:06:23.201 CC lib/accel/accel_rpc.o 00:06:23.201 CC lib/blob/request.o 00:06:23.201 CC lib/fsdev/fsdev_io.o 00:06:23.201 CC lib/vfu_tgt/tgt_rpc.o 00:06:23.201 CC lib/init/subsystem.o 00:06:23.201 CC lib/virtio/virtio.o 00:06:23.201 CC lib/virtio/virtio_vhost_user.o 00:06:23.201 CC lib/blob/zeroes.o 00:06:23.201 CC lib/accel/accel_sw.o 00:06:23.201 CC lib/fsdev/fsdev_rpc.o 00:06:23.201 CC lib/init/subsystem_rpc.o 00:06:23.201 CC lib/virtio/virtio_vfio_user.o 00:06:23.201 CC lib/blob/blob_bs_dev.o 00:06:23.201 CC lib/init/rpc.o 00:06:23.201 CC lib/virtio/virtio_pci.o 00:06:23.459 LIB libspdk_init.a 00:06:23.459 SO libspdk_init.so.6.0 00:06:23.718 LIB libspdk_vfu_tgt.a 00:06:23.718 SYMLINK libspdk_init.so 00:06:23.718 SO libspdk_vfu_tgt.so.3.0 00:06:23.718 LIB libspdk_virtio.a 00:06:23.718 SYMLINK libspdk_vfu_tgt.so 00:06:23.718 SO libspdk_virtio.so.7.0 00:06:23.718 SYMLINK libspdk_virtio.so 00:06:23.718 CC lib/event/app.o 
00:06:23.718 CC lib/event/reactor.o 00:06:23.718 CC lib/event/log_rpc.o 00:06:23.718 CC lib/event/app_rpc.o 00:06:23.718 CC lib/event/scheduler_static.o 00:06:23.976 LIB libspdk_fsdev.a 00:06:23.976 SO libspdk_fsdev.so.2.0 00:06:23.976 SYMLINK libspdk_fsdev.so 00:06:24.234 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:06:24.234 LIB libspdk_event.a 00:06:24.234 SO libspdk_event.so.14.0 00:06:24.234 SYMLINK libspdk_event.so 00:06:24.492 LIB libspdk_accel.a 00:06:24.492 SO libspdk_accel.so.16.0 00:06:24.492 SYMLINK libspdk_accel.so 00:06:24.492 LIB libspdk_nvme.a 00:06:24.750 CC lib/bdev/bdev.o 00:06:24.750 CC lib/bdev/bdev_rpc.o 00:06:24.750 CC lib/bdev/bdev_zone.o 00:06:24.750 CC lib/bdev/part.o 00:06:24.750 CC lib/bdev/scsi_nvme.o 00:06:24.750 SO libspdk_nvme.so.15.0 00:06:24.750 LIB libspdk_fuse_dispatcher.a 00:06:24.750 SO libspdk_fuse_dispatcher.so.1.0 00:06:25.008 SYMLINK libspdk_nvme.so 00:06:25.008 SYMLINK libspdk_fuse_dispatcher.so 00:06:26.381 LIB libspdk_blob.a 00:06:26.381 SO libspdk_blob.so.11.0 00:06:26.381 SYMLINK libspdk_blob.so 00:06:26.638 CC lib/blobfs/blobfs.o 00:06:26.638 CC lib/blobfs/tree.o 00:06:26.638 CC lib/lvol/lvol.o 00:06:27.204 LIB libspdk_bdev.a 00:06:27.204 SO libspdk_bdev.so.17.0 00:06:27.471 SYMLINK libspdk_bdev.so 00:06:27.471 LIB libspdk_blobfs.a 00:06:27.471 SO libspdk_blobfs.so.10.0 00:06:27.471 SYMLINK libspdk_blobfs.so 00:06:27.471 CC lib/scsi/dev.o 00:06:27.471 CC lib/nbd/nbd.o 00:06:27.471 CC lib/nbd/nbd_rpc.o 00:06:27.471 CC lib/scsi/lun.o 00:06:27.472 CC lib/ftl/ftl_core.o 00:06:27.472 CC lib/scsi/port.o 00:06:27.472 CC lib/ftl/ftl_init.o 00:06:27.472 CC lib/scsi/scsi.o 00:06:27.472 CC lib/ftl/ftl_layout.o 00:06:27.472 CC lib/ublk/ublk.o 00:06:27.472 CC lib/ftl/ftl_debug.o 00:06:27.472 CC lib/scsi/scsi_pr.o 00:06:27.472 CC lib/ublk/ublk_rpc.o 00:06:27.472 CC lib/nvmf/ctrlr_discovery.o 00:06:27.472 CC lib/scsi/scsi_bdev.o 00:06:27.472 CC lib/ftl/ftl_io.o 00:06:27.472 CC lib/scsi/scsi_rpc.o 00:06:27.472 CC lib/nvmf/ctrlr.o 00:06:27.472 CC lib/nvmf/ctrlr_bdev.o 00:06:27.472 CC lib/ftl/ftl_sb.o 00:06:27.472 CC lib/scsi/task.o 00:06:27.472 CC lib/ftl/ftl_l2p.o 00:06:27.472 CC lib/nvmf/subsystem.o 00:06:27.472 CC lib/nvmf/nvmf.o 00:06:27.472 CC lib/ftl/ftl_l2p_flat.o 00:06:27.472 CC lib/nvmf/nvmf_rpc.o 00:06:27.472 CC lib/ftl/ftl_band.o 00:06:27.472 CC lib/ftl/ftl_nv_cache.o 00:06:27.472 CC lib/nvmf/transport.o 00:06:27.472 CC lib/nvmf/tcp.o 00:06:27.472 CC lib/ftl/ftl_band_ops.o 00:06:27.472 CC lib/nvmf/stubs.o 00:06:27.472 CC lib/ftl/ftl_writer.o 00:06:27.472 CC lib/nvmf/mdns_server.o 00:06:27.472 CC lib/ftl/ftl_rq.o 00:06:27.472 CC lib/nvmf/vfio_user.o 00:06:27.472 CC lib/ftl/ftl_reloc.o 00:06:27.472 CC lib/nvmf/rdma.o 00:06:27.472 CC lib/ftl/ftl_l2p_cache.o 00:06:27.472 CC lib/nvmf/auth.o 00:06:27.472 CC lib/ftl/ftl_p2l.o 00:06:27.472 CC lib/ftl/ftl_p2l_log.o 00:06:27.472 CC lib/ftl/mngt/ftl_mngt.o 00:06:27.472 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:06:27.472 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:06:27.472 CC lib/ftl/mngt/ftl_mngt_startup.o 00:06:27.472 CC lib/ftl/mngt/ftl_mngt_md.o 00:06:27.472 LIB libspdk_lvol.a 00:06:27.737 SO libspdk_lvol.so.10.0 00:06:27.737 SYMLINK libspdk_lvol.so 00:06:27.737 CC lib/ftl/mngt/ftl_mngt_misc.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_band.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:06:28.003 CC lib/ftl/mngt/ftl_mngt_upgrade.o 
00:06:28.003 CC lib/ftl/utils/ftl_conf.o 00:06:28.003 CC lib/ftl/utils/ftl_md.o 00:06:28.003 CC lib/ftl/utils/ftl_mempool.o 00:06:28.003 CC lib/ftl/utils/ftl_bitmap.o 00:06:28.003 CC lib/ftl/utils/ftl_property.o 00:06:28.003 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:06:28.003 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:06:28.003 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:06:28.003 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:06:28.003 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:06:28.003 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:06:28.261 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:06:28.261 CC lib/ftl/upgrade/ftl_sb_v3.o 00:06:28.261 CC lib/ftl/upgrade/ftl_sb_v5.o 00:06:28.261 CC lib/ftl/nvc/ftl_nvc_dev.o 00:06:28.261 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:06:28.261 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:06:28.261 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:06:28.261 CC lib/ftl/base/ftl_base_dev.o 00:06:28.261 CC lib/ftl/ftl_trace.o 00:06:28.261 CC lib/ftl/base/ftl_base_bdev.o 00:06:28.521 LIB libspdk_nbd.a 00:06:28.521 SO libspdk_nbd.so.7.0 00:06:28.521 LIB libspdk_scsi.a 00:06:28.521 SYMLINK libspdk_nbd.so 00:06:28.521 SO libspdk_scsi.so.9.0 00:06:28.521 SYMLINK libspdk_scsi.so 00:06:28.779 LIB libspdk_ublk.a 00:06:28.779 SO libspdk_ublk.so.3.0 00:06:28.779 SYMLINK libspdk_ublk.so 00:06:28.779 CC lib/iscsi/conn.o 00:06:28.779 CC lib/vhost/vhost.o 00:06:28.779 CC lib/vhost/vhost_rpc.o 00:06:28.779 CC lib/iscsi/init_grp.o 00:06:28.779 CC lib/iscsi/iscsi.o 00:06:28.779 CC lib/iscsi/param.o 00:06:28.779 CC lib/vhost/vhost_scsi.o 00:06:28.779 CC lib/iscsi/portal_grp.o 00:06:28.779 CC lib/iscsi/tgt_node.o 00:06:28.779 CC lib/vhost/vhost_blk.o 00:06:28.779 CC lib/iscsi/iscsi_subsystem.o 00:06:28.779 CC lib/iscsi/iscsi_rpc.o 00:06:28.779 CC lib/vhost/rte_vhost_user.o 00:06:28.779 CC lib/iscsi/task.o 00:06:29.037 LIB libspdk_ftl.a 00:06:29.295 SO libspdk_ftl.so.9.0 00:06:29.554 SYMLINK libspdk_ftl.so 00:06:30.121 LIB libspdk_vhost.a 00:06:30.121 SO libspdk_vhost.so.8.0 00:06:30.121 SYMLINK libspdk_vhost.so 00:06:30.379 LIB libspdk_iscsi.a 00:06:30.379 LIB libspdk_nvmf.a 00:06:30.379 SO libspdk_iscsi.so.8.0 00:06:30.379 SO libspdk_nvmf.so.20.0 00:06:30.379 SYMLINK libspdk_iscsi.so 00:06:30.637 SYMLINK libspdk_nvmf.so 00:06:30.896 CC module/env_dpdk/env_dpdk_rpc.o 00:06:30.896 CC module/vfu_device/vfu_virtio.o 00:06:30.896 CC module/vfu_device/vfu_virtio_blk.o 00:06:30.896 CC module/vfu_device/vfu_virtio_scsi.o 00:06:30.896 CC module/vfu_device/vfu_virtio_rpc.o 00:06:30.896 CC module/vfu_device/vfu_virtio_fs.o 00:06:30.896 CC module/sock/posix/posix.o 00:06:30.896 CC module/accel/ioat/accel_ioat.o 00:06:30.896 CC module/accel/ioat/accel_ioat_rpc.o 00:06:30.896 CC module/accel/iaa/accel_iaa.o 00:06:30.896 CC module/accel/error/accel_error.o 00:06:30.896 CC module/accel/iaa/accel_iaa_rpc.o 00:06:30.896 CC module/blob/bdev/blob_bdev.o 00:06:30.896 CC module/scheduler/gscheduler/gscheduler.o 00:06:30.896 CC module/keyring/linux/keyring.o 00:06:30.896 CC module/accel/error/accel_error_rpc.o 00:06:30.896 CC module/scheduler/dynamic/scheduler_dynamic.o 00:06:30.896 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:06:30.896 CC module/keyring/linux/keyring_rpc.o 00:06:30.896 CC module/fsdev/aio/fsdev_aio.o 00:06:30.896 CC module/fsdev/aio/fsdev_aio_rpc.o 00:06:30.896 CC module/fsdev/aio/linux_aio_mgr.o 00:06:30.896 CC module/accel/dsa/accel_dsa.o 00:06:30.896 CC module/accel/dsa/accel_dsa_rpc.o 00:06:30.896 CC module/keyring/file/keyring_rpc.o 00:06:30.896 CC module/keyring/file/keyring.o 00:06:30.896 LIB 
libspdk_env_dpdk_rpc.a 00:06:30.896 SO libspdk_env_dpdk_rpc.so.6.0 00:06:31.155 SYMLINK libspdk_env_dpdk_rpc.so 00:06:31.155 LIB libspdk_scheduler_dpdk_governor.a 00:06:31.155 SO libspdk_scheduler_dpdk_governor.so.4.0 00:06:31.155 LIB libspdk_accel_ioat.a 00:06:31.155 LIB libspdk_accel_error.a 00:06:31.155 LIB libspdk_scheduler_dynamic.a 00:06:31.155 SO libspdk_accel_ioat.so.6.0 00:06:31.155 LIB libspdk_accel_iaa.a 00:06:31.155 LIB libspdk_keyring_linux.a 00:06:31.155 SO libspdk_scheduler_dynamic.so.4.0 00:06:31.155 SO libspdk_accel_error.so.2.0 00:06:31.155 LIB libspdk_keyring_file.a 00:06:31.155 SYMLINK libspdk_scheduler_dpdk_governor.so 00:06:31.155 SO libspdk_accel_iaa.so.3.0 00:06:31.155 LIB libspdk_scheduler_gscheduler.a 00:06:31.155 SO libspdk_keyring_linux.so.1.0 00:06:31.155 SO libspdk_keyring_file.so.2.0 00:06:31.155 SO libspdk_scheduler_gscheduler.so.4.0 00:06:31.155 SYMLINK libspdk_accel_ioat.so 00:06:31.155 SYMLINK libspdk_scheduler_dynamic.so 00:06:31.155 SYMLINK libspdk_accel_error.so 00:06:31.155 LIB libspdk_blob_bdev.a 00:06:31.155 SYMLINK libspdk_accel_iaa.so 00:06:31.155 SYMLINK libspdk_keyring_linux.so 00:06:31.155 LIB libspdk_accel_dsa.a 00:06:31.155 SYMLINK libspdk_keyring_file.so 00:06:31.155 SYMLINK libspdk_scheduler_gscheduler.so 00:06:31.155 SO libspdk_blob_bdev.so.11.0 00:06:31.155 SO libspdk_accel_dsa.so.5.0 00:06:31.413 SYMLINK libspdk_blob_bdev.so 00:06:31.413 SYMLINK libspdk_accel_dsa.so 00:06:31.413 LIB libspdk_vfu_device.a 00:06:31.680 CC module/bdev/gpt/gpt.o 00:06:31.680 CC module/bdev/lvol/vbdev_lvol.o 00:06:31.680 CC module/blobfs/bdev/blobfs_bdev.o 00:06:31.680 CC module/bdev/gpt/vbdev_gpt.o 00:06:31.680 CC module/bdev/passthru/vbdev_passthru.o 00:06:31.680 CC module/bdev/malloc/bdev_malloc.o 00:06:31.680 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:06:31.680 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:06:31.680 CC module/bdev/malloc/bdev_malloc_rpc.o 00:06:31.680 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:06:31.680 CC module/bdev/raid/bdev_raid.o 00:06:31.680 CC module/bdev/delay/vbdev_delay.o 00:06:31.680 CC module/bdev/error/vbdev_error.o 00:06:31.680 CC module/bdev/null/bdev_null.o 00:06:31.680 CC module/bdev/delay/vbdev_delay_rpc.o 00:06:31.680 CC module/bdev/null/bdev_null_rpc.o 00:06:31.680 CC module/bdev/raid/bdev_raid_rpc.o 00:06:31.680 CC module/bdev/error/vbdev_error_rpc.o 00:06:31.680 CC module/bdev/raid/bdev_raid_sb.o 00:06:31.680 CC module/bdev/aio/bdev_aio.o 00:06:31.680 CC module/bdev/aio/bdev_aio_rpc.o 00:06:31.680 SO libspdk_vfu_device.so.3.0 00:06:31.680 CC module/bdev/raid/raid1.o 00:06:31.680 CC module/bdev/raid/raid0.o 00:06:31.680 CC module/bdev/raid/concat.o 00:06:31.680 CC module/bdev/split/vbdev_split.o 00:06:31.680 CC module/bdev/split/vbdev_split_rpc.o 00:06:31.680 CC module/bdev/nvme/bdev_nvme.o 00:06:31.680 CC module/bdev/virtio/bdev_virtio_scsi.o 00:06:31.680 CC module/bdev/nvme/bdev_nvme_rpc.o 00:06:31.680 CC module/bdev/iscsi/bdev_iscsi.o 00:06:31.680 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:06:31.680 CC module/bdev/virtio/bdev_virtio_blk.o 00:06:31.680 CC module/bdev/nvme/nvme_rpc.o 00:06:31.680 CC module/bdev/ftl/bdev_ftl.o 00:06:31.680 CC module/bdev/virtio/bdev_virtio_rpc.o 00:06:31.680 CC module/bdev/zone_block/vbdev_zone_block.o 00:06:31.680 CC module/bdev/nvme/bdev_mdns_client.o 00:06:31.680 CC module/bdev/nvme/vbdev_opal.o 00:06:31.680 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:06:31.680 CC module/bdev/ftl/bdev_ftl_rpc.o 00:06:31.680 CC module/bdev/nvme/vbdev_opal_rpc.o 00:06:31.680 CC 
module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:06:31.680 SYMLINK libspdk_vfu_device.so 00:06:31.680 LIB libspdk_fsdev_aio.a 00:06:31.939 SO libspdk_fsdev_aio.so.1.0 00:06:31.939 LIB libspdk_sock_posix.a 00:06:31.939 SO libspdk_sock_posix.so.6.0 00:06:31.939 SYMLINK libspdk_fsdev_aio.so 00:06:31.939 LIB libspdk_blobfs_bdev.a 00:06:31.939 SYMLINK libspdk_sock_posix.so 00:06:31.939 SO libspdk_blobfs_bdev.so.6.0 00:06:31.939 LIB libspdk_bdev_null.a 00:06:31.939 LIB libspdk_bdev_split.a 00:06:31.939 SO libspdk_bdev_split.so.6.0 00:06:31.939 SO libspdk_bdev_null.so.6.0 00:06:31.939 SYMLINK libspdk_blobfs_bdev.so 00:06:31.939 LIB libspdk_bdev_gpt.a 00:06:32.197 SYMLINK libspdk_bdev_split.so 00:06:32.197 SO libspdk_bdev_gpt.so.6.0 00:06:32.197 SYMLINK libspdk_bdev_null.so 00:06:32.197 LIB libspdk_bdev_error.a 00:06:32.197 LIB libspdk_bdev_ftl.a 00:06:32.197 SO libspdk_bdev_error.so.6.0 00:06:32.197 SO libspdk_bdev_ftl.so.6.0 00:06:32.197 LIB libspdk_bdev_zone_block.a 00:06:32.197 SYMLINK libspdk_bdev_gpt.so 00:06:32.197 LIB libspdk_bdev_passthru.a 00:06:32.197 SO libspdk_bdev_zone_block.so.6.0 00:06:32.197 LIB libspdk_bdev_iscsi.a 00:06:32.197 LIB libspdk_bdev_delay.a 00:06:32.197 LIB libspdk_bdev_malloc.a 00:06:32.197 SO libspdk_bdev_passthru.so.6.0 00:06:32.197 SYMLINK libspdk_bdev_ftl.so 00:06:32.197 SYMLINK libspdk_bdev_error.so 00:06:32.197 SO libspdk_bdev_iscsi.so.6.0 00:06:32.197 LIB libspdk_bdev_aio.a 00:06:32.197 SO libspdk_bdev_delay.so.6.0 00:06:32.197 SO libspdk_bdev_malloc.so.6.0 00:06:32.197 SO libspdk_bdev_aio.so.6.0 00:06:32.197 SYMLINK libspdk_bdev_zone_block.so 00:06:32.197 SYMLINK libspdk_bdev_passthru.so 00:06:32.197 SYMLINK libspdk_bdev_iscsi.so 00:06:32.197 SYMLINK libspdk_bdev_delay.so 00:06:32.197 SYMLINK libspdk_bdev_malloc.so 00:06:32.197 SYMLINK libspdk_bdev_aio.so 00:06:32.197 LIB libspdk_bdev_lvol.a 00:06:32.455 SO libspdk_bdev_lvol.so.6.0 00:06:32.455 LIB libspdk_bdev_virtio.a 00:06:32.456 SO libspdk_bdev_virtio.so.6.0 00:06:32.456 SYMLINK libspdk_bdev_lvol.so 00:06:32.456 SYMLINK libspdk_bdev_virtio.so 00:06:32.713 LIB libspdk_bdev_raid.a 00:06:32.971 SO libspdk_bdev_raid.so.6.0 00:06:32.971 SYMLINK libspdk_bdev_raid.so 00:06:34.349 LIB libspdk_bdev_nvme.a 00:06:34.349 SO libspdk_bdev_nvme.so.7.1 00:06:34.349 SYMLINK libspdk_bdev_nvme.so 00:06:34.608 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:06:34.608 CC module/event/subsystems/scheduler/scheduler.o 00:06:34.608 CC module/event/subsystems/iobuf/iobuf.o 00:06:34.608 CC module/event/subsystems/keyring/keyring.o 00:06:34.608 CC module/event/subsystems/vmd/vmd.o 00:06:34.608 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:06:34.608 CC module/event/subsystems/fsdev/fsdev.o 00:06:34.608 CC module/event/subsystems/vmd/vmd_rpc.o 00:06:34.608 CC module/event/subsystems/sock/sock.o 00:06:34.608 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:06:34.866 LIB libspdk_event_keyring.a 00:06:34.866 LIB libspdk_event_vhost_blk.a 00:06:34.866 LIB libspdk_event_fsdev.a 00:06:34.866 LIB libspdk_event_vfu_tgt.a 00:06:34.866 LIB libspdk_event_scheduler.a 00:06:34.866 LIB libspdk_event_vmd.a 00:06:34.866 LIB libspdk_event_sock.a 00:06:34.866 SO libspdk_event_keyring.so.1.0 00:06:34.866 SO libspdk_event_vhost_blk.so.3.0 00:06:34.866 LIB libspdk_event_iobuf.a 00:06:34.866 SO libspdk_event_fsdev.so.1.0 00:06:34.866 SO libspdk_event_vfu_tgt.so.3.0 00:06:34.866 SO libspdk_event_scheduler.so.4.0 00:06:34.866 SO libspdk_event_sock.so.5.0 00:06:34.866 SO libspdk_event_vmd.so.6.0 00:06:34.866 SO libspdk_event_iobuf.so.3.0 00:06:34.866 SYMLINK 
libspdk_event_keyring.so 00:06:34.866 SYMLINK libspdk_event_vhost_blk.so 00:06:34.866 SYMLINK libspdk_event_fsdev.so 00:06:34.866 SYMLINK libspdk_event_vfu_tgt.so 00:06:34.866 SYMLINK libspdk_event_scheduler.so 00:06:34.866 SYMLINK libspdk_event_sock.so 00:06:34.866 SYMLINK libspdk_event_vmd.so 00:06:34.866 SYMLINK libspdk_event_iobuf.so 00:06:35.124 CC module/event/subsystems/accel/accel.o 00:06:35.383 LIB libspdk_event_accel.a 00:06:35.383 SO libspdk_event_accel.so.6.0 00:06:35.383 SYMLINK libspdk_event_accel.so 00:06:35.642 CC module/event/subsystems/bdev/bdev.o 00:06:35.642 LIB libspdk_event_bdev.a 00:06:35.919 SO libspdk_event_bdev.so.6.0 00:06:35.919 SYMLINK libspdk_event_bdev.so 00:06:35.919 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:06:35.919 CC module/event/subsystems/nbd/nbd.o 00:06:35.919 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:06:35.919 CC module/event/subsystems/ublk/ublk.o 00:06:35.919 CC module/event/subsystems/scsi/scsi.o 00:06:36.185 LIB libspdk_event_nbd.a 00:06:36.185 LIB libspdk_event_ublk.a 00:06:36.185 SO libspdk_event_nbd.so.6.0 00:06:36.185 LIB libspdk_event_scsi.a 00:06:36.185 SO libspdk_event_ublk.so.3.0 00:06:36.185 SO libspdk_event_scsi.so.6.0 00:06:36.185 SYMLINK libspdk_event_nbd.so 00:06:36.185 SYMLINK libspdk_event_ublk.so 00:06:36.185 SYMLINK libspdk_event_scsi.so 00:06:36.185 LIB libspdk_event_nvmf.a 00:06:36.185 SO libspdk_event_nvmf.so.6.0 00:06:36.443 SYMLINK libspdk_event_nvmf.so 00:06:36.443 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:06:36.443 CC module/event/subsystems/iscsi/iscsi.o 00:06:36.443 LIB libspdk_event_vhost_scsi.a 00:06:36.443 SO libspdk_event_vhost_scsi.so.3.0 00:06:36.443 LIB libspdk_event_iscsi.a 00:06:36.701 SO libspdk_event_iscsi.so.6.0 00:06:36.701 SYMLINK libspdk_event_vhost_scsi.so 00:06:36.701 SYMLINK libspdk_event_iscsi.so 00:06:36.701 SO libspdk.so.6.0 00:06:36.701 SYMLINK libspdk.so 00:06:36.962 CC app/trace_record/trace_record.o 00:06:36.962 CXX app/trace/trace.o 00:06:36.962 CC app/spdk_top/spdk_top.o 00:06:36.962 CC app/spdk_nvme_perf/perf.o 00:06:36.962 CC app/spdk_lspci/spdk_lspci.o 00:06:36.962 CC app/spdk_nvme_discover/discovery_aer.o 00:06:36.962 CC app/spdk_nvme_identify/identify.o 00:06:36.962 TEST_HEADER include/spdk/accel.h 00:06:36.962 TEST_HEADER include/spdk/accel_module.h 00:06:36.962 TEST_HEADER include/spdk/assert.h 00:06:36.962 CC test/rpc_client/rpc_client_test.o 00:06:36.962 TEST_HEADER include/spdk/barrier.h 00:06:36.962 TEST_HEADER include/spdk/bdev.h 00:06:36.962 TEST_HEADER include/spdk/base64.h 00:06:36.962 TEST_HEADER include/spdk/bdev_module.h 00:06:36.962 TEST_HEADER include/spdk/bdev_zone.h 00:06:36.962 TEST_HEADER include/spdk/bit_array.h 00:06:36.962 TEST_HEADER include/spdk/bit_pool.h 00:06:36.962 TEST_HEADER include/spdk/blob_bdev.h 00:06:36.962 TEST_HEADER include/spdk/blobfs_bdev.h 00:06:36.962 TEST_HEADER include/spdk/blobfs.h 00:06:36.962 TEST_HEADER include/spdk/blob.h 00:06:36.962 TEST_HEADER include/spdk/conf.h 00:06:36.962 TEST_HEADER include/spdk/config.h 00:06:36.962 TEST_HEADER include/spdk/cpuset.h 00:06:36.962 TEST_HEADER include/spdk/crc16.h 00:06:36.962 TEST_HEADER include/spdk/crc32.h 00:06:36.962 TEST_HEADER include/spdk/crc64.h 00:06:36.962 TEST_HEADER include/spdk/dif.h 00:06:36.962 TEST_HEADER include/spdk/dma.h 00:06:36.962 TEST_HEADER include/spdk/endian.h 00:06:36.962 TEST_HEADER include/spdk/env_dpdk.h 00:06:36.962 TEST_HEADER include/spdk/env.h 00:06:36.962 TEST_HEADER include/spdk/event.h 00:06:36.962 TEST_HEADER include/spdk/fd.h 00:06:36.962 
TEST_HEADER include/spdk/fd_group.h 00:06:36.962 TEST_HEADER include/spdk/file.h 00:06:36.962 TEST_HEADER include/spdk/fsdev.h 00:06:36.962 TEST_HEADER include/spdk/fsdev_module.h 00:06:36.962 TEST_HEADER include/spdk/ftl.h 00:06:36.962 TEST_HEADER include/spdk/fuse_dispatcher.h 00:06:36.962 TEST_HEADER include/spdk/gpt_spec.h 00:06:36.962 TEST_HEADER include/spdk/hexlify.h 00:06:36.962 TEST_HEADER include/spdk/histogram_data.h 00:06:36.962 TEST_HEADER include/spdk/idxd.h 00:06:36.962 TEST_HEADER include/spdk/idxd_spec.h 00:06:36.962 TEST_HEADER include/spdk/init.h 00:06:36.962 TEST_HEADER include/spdk/ioat.h 00:06:36.962 TEST_HEADER include/spdk/ioat_spec.h 00:06:36.962 TEST_HEADER include/spdk/iscsi_spec.h 00:06:36.962 TEST_HEADER include/spdk/json.h 00:06:36.962 TEST_HEADER include/spdk/jsonrpc.h 00:06:36.962 TEST_HEADER include/spdk/keyring.h 00:06:36.962 TEST_HEADER include/spdk/keyring_module.h 00:06:36.962 TEST_HEADER include/spdk/likely.h 00:06:36.962 TEST_HEADER include/spdk/log.h 00:06:36.962 TEST_HEADER include/spdk/lvol.h 00:06:36.962 TEST_HEADER include/spdk/md5.h 00:06:36.962 TEST_HEADER include/spdk/memory.h 00:06:36.962 TEST_HEADER include/spdk/mmio.h 00:06:36.962 TEST_HEADER include/spdk/nbd.h 00:06:36.962 TEST_HEADER include/spdk/net.h 00:06:36.962 TEST_HEADER include/spdk/notify.h 00:06:36.962 TEST_HEADER include/spdk/nvme.h 00:06:36.962 TEST_HEADER include/spdk/nvme_intel.h 00:06:36.962 TEST_HEADER include/spdk/nvme_ocssd.h 00:06:36.962 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:06:36.962 TEST_HEADER include/spdk/nvme_spec.h 00:06:36.963 TEST_HEADER include/spdk/nvmf_cmd.h 00:06:36.963 TEST_HEADER include/spdk/nvme_zns.h 00:06:36.963 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:06:36.963 TEST_HEADER include/spdk/nvmf.h 00:06:36.963 TEST_HEADER include/spdk/nvmf_spec.h 00:06:36.963 TEST_HEADER include/spdk/nvmf_transport.h 00:06:36.963 TEST_HEADER include/spdk/opal.h 00:06:36.963 TEST_HEADER include/spdk/opal_spec.h 00:06:36.963 TEST_HEADER include/spdk/pci_ids.h 00:06:36.963 TEST_HEADER include/spdk/pipe.h 00:06:36.963 TEST_HEADER include/spdk/queue.h 00:06:36.963 TEST_HEADER include/spdk/reduce.h 00:06:36.963 TEST_HEADER include/spdk/rpc.h 00:06:36.963 TEST_HEADER include/spdk/scheduler.h 00:06:36.963 TEST_HEADER include/spdk/scsi.h 00:06:36.963 TEST_HEADER include/spdk/scsi_spec.h 00:06:36.963 TEST_HEADER include/spdk/sock.h 00:06:36.963 TEST_HEADER include/spdk/stdinc.h 00:06:36.963 TEST_HEADER include/spdk/string.h 00:06:36.963 TEST_HEADER include/spdk/thread.h 00:06:36.963 TEST_HEADER include/spdk/trace.h 00:06:36.963 TEST_HEADER include/spdk/trace_parser.h 00:06:36.963 TEST_HEADER include/spdk/tree.h 00:06:36.963 TEST_HEADER include/spdk/ublk.h 00:06:36.963 TEST_HEADER include/spdk/util.h 00:06:36.963 TEST_HEADER include/spdk/uuid.h 00:06:36.963 TEST_HEADER include/spdk/version.h 00:06:36.963 TEST_HEADER include/spdk/vfio_user_pci.h 00:06:36.963 TEST_HEADER include/spdk/vhost.h 00:06:36.963 TEST_HEADER include/spdk/vfio_user_spec.h 00:06:36.963 TEST_HEADER include/spdk/vmd.h 00:06:36.963 CC examples/interrupt_tgt/interrupt_tgt.o 00:06:36.963 TEST_HEADER include/spdk/xor.h 00:06:36.963 TEST_HEADER include/spdk/zipf.h 00:06:36.963 CXX test/cpp_headers/accel.o 00:06:36.963 CXX test/cpp_headers/accel_module.o 00:06:36.963 CXX test/cpp_headers/assert.o 00:06:36.963 CXX test/cpp_headers/barrier.o 00:06:36.963 CXX test/cpp_headers/base64.o 00:06:36.963 CXX test/cpp_headers/bdev.o 00:06:36.963 CXX test/cpp_headers/bdev_module.o 00:06:36.963 CXX 
test/cpp_headers/bdev_zone.o 00:06:36.963 CXX test/cpp_headers/bit_array.o 00:06:36.963 CXX test/cpp_headers/bit_pool.o 00:06:36.963 CXX test/cpp_headers/blob_bdev.o 00:06:36.963 CXX test/cpp_headers/blobfs_bdev.o 00:06:36.963 CXX test/cpp_headers/blobfs.o 00:06:36.963 CXX test/cpp_headers/blob.o 00:06:36.963 CXX test/cpp_headers/conf.o 00:06:36.963 CXX test/cpp_headers/config.o 00:06:36.963 CXX test/cpp_headers/cpuset.o 00:06:36.963 CC app/nvmf_tgt/nvmf_main.o 00:06:36.963 CXX test/cpp_headers/crc16.o 00:06:36.963 CC app/spdk_dd/spdk_dd.o 00:06:36.963 CC app/iscsi_tgt/iscsi_tgt.o 00:06:36.963 CXX test/cpp_headers/crc32.o 00:06:36.963 CC app/spdk_tgt/spdk_tgt.o 00:06:36.963 CC examples/ioat/verify/verify.o 00:06:36.963 CC examples/ioat/perf/perf.o 00:06:36.963 CC examples/util/zipf/zipf.o 00:06:36.963 CC app/fio/nvme/fio_plugin.o 00:06:36.963 CC test/app/histogram_perf/histogram_perf.o 00:06:36.963 CC test/env/vtophys/vtophys.o 00:06:36.963 CC test/app/stub/stub.o 00:06:36.963 CC test/env/memory/memory_ut.o 00:06:36.963 CC test/thread/poller_perf/poller_perf.o 00:06:36.963 CC test/app/jsoncat/jsoncat.o 00:06:37.224 CC test/env/pci/pci_ut.o 00:06:37.224 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:06:37.224 CC app/fio/bdev/fio_plugin.o 00:06:37.224 CC test/dma/test_dma/test_dma.o 00:06:37.224 CC test/app/bdev_svc/bdev_svc.o 00:06:37.224 LINK spdk_lspci 00:06:37.224 CC test/env/mem_callbacks/mem_callbacks.o 00:06:37.224 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:06:37.488 LINK rpc_client_test 00:06:37.488 LINK spdk_nvme_discover 00:06:37.488 LINK interrupt_tgt 00:06:37.488 LINK jsoncat 00:06:37.488 LINK poller_perf 00:06:37.488 LINK zipf 00:06:37.488 LINK histogram_perf 00:06:37.488 LINK vtophys 00:06:37.488 CXX test/cpp_headers/crc64.o 00:06:37.488 LINK env_dpdk_post_init 00:06:37.488 CXX test/cpp_headers/dif.o 00:06:37.488 CXX test/cpp_headers/dma.o 00:06:37.488 CXX test/cpp_headers/endian.o 00:06:37.488 CXX test/cpp_headers/env_dpdk.o 00:06:37.488 CXX test/cpp_headers/env.o 00:06:37.488 CXX test/cpp_headers/event.o 00:06:37.488 CXX test/cpp_headers/fd_group.o 00:06:37.488 CXX test/cpp_headers/fd.o 00:06:37.488 CXX test/cpp_headers/file.o 00:06:37.488 LINK iscsi_tgt 00:06:37.488 LINK nvmf_tgt 00:06:37.488 LINK stub 00:06:37.488 CXX test/cpp_headers/fsdev.o 00:06:37.488 CXX test/cpp_headers/fsdev_module.o 00:06:37.488 LINK spdk_trace_record 00:06:37.488 CXX test/cpp_headers/ftl.o 00:06:37.488 CXX test/cpp_headers/fuse_dispatcher.o 00:06:37.488 LINK verify 00:06:37.488 LINK ioat_perf 00:06:37.488 CXX test/cpp_headers/gpt_spec.o 00:06:37.488 CXX test/cpp_headers/hexlify.o 00:06:37.488 LINK bdev_svc 00:06:37.488 LINK spdk_tgt 00:06:37.488 CXX test/cpp_headers/histogram_data.o 00:06:37.751 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:06:37.751 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:06:37.751 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:06:37.751 CXX test/cpp_headers/idxd.o 00:06:37.751 CXX test/cpp_headers/idxd_spec.o 00:06:37.751 CXX test/cpp_headers/init.o 00:06:37.751 CXX test/cpp_headers/ioat.o 00:06:37.751 LINK spdk_dd 00:06:37.751 CXX test/cpp_headers/ioat_spec.o 00:06:37.751 CXX test/cpp_headers/iscsi_spec.o 00:06:38.014 CXX test/cpp_headers/json.o 00:06:38.014 CXX test/cpp_headers/jsonrpc.o 00:06:38.014 CXX test/cpp_headers/keyring.o 00:06:38.014 LINK spdk_trace 00:06:38.014 CXX test/cpp_headers/keyring_module.o 00:06:38.014 CXX test/cpp_headers/likely.o 00:06:38.014 CXX test/cpp_headers/log.o 00:06:38.014 CXX test/cpp_headers/lvol.o 00:06:38.014 CXX 
test/cpp_headers/md5.o 00:06:38.014 CXX test/cpp_headers/memory.o 00:06:38.014 CXX test/cpp_headers/mmio.o 00:06:38.014 CXX test/cpp_headers/nbd.o 00:06:38.014 CXX test/cpp_headers/net.o 00:06:38.014 CXX test/cpp_headers/notify.o 00:06:38.014 LINK pci_ut 00:06:38.014 CXX test/cpp_headers/nvme.o 00:06:38.014 CXX test/cpp_headers/nvme_intel.o 00:06:38.014 CXX test/cpp_headers/nvme_ocssd.o 00:06:38.014 CXX test/cpp_headers/nvme_ocssd_spec.o 00:06:38.014 CXX test/cpp_headers/nvme_spec.o 00:06:38.014 CXX test/cpp_headers/nvme_zns.o 00:06:38.014 CXX test/cpp_headers/nvmf_cmd.o 00:06:38.014 CXX test/cpp_headers/nvmf_fc_spec.o 00:06:38.014 CXX test/cpp_headers/nvmf.o 00:06:38.014 CC examples/vmd/lsvmd/lsvmd.o 00:06:38.014 CC examples/sock/hello_world/hello_sock.o 00:06:38.014 CC examples/vmd/led/led.o 00:06:38.014 LINK nvme_fuzz 00:06:38.014 CC examples/idxd/perf/perf.o 00:06:38.275 LINK spdk_bdev 00:06:38.275 CC examples/thread/thread/thread_ex.o 00:06:38.275 CXX test/cpp_headers/nvmf_spec.o 00:06:38.275 LINK test_dma 00:06:38.275 CXX test/cpp_headers/nvmf_transport.o 00:06:38.275 LINK spdk_nvme 00:06:38.275 CC test/event/event_perf/event_perf.o 00:06:38.275 CC test/event/reactor/reactor.o 00:06:38.275 CXX test/cpp_headers/opal.o 00:06:38.275 CC test/event/reactor_perf/reactor_perf.o 00:06:38.275 CXX test/cpp_headers/opal_spec.o 00:06:38.275 CXX test/cpp_headers/pci_ids.o 00:06:38.275 CXX test/cpp_headers/pipe.o 00:06:38.275 CXX test/cpp_headers/queue.o 00:06:38.275 CC test/event/app_repeat/app_repeat.o 00:06:38.275 CXX test/cpp_headers/reduce.o 00:06:38.275 CXX test/cpp_headers/rpc.o 00:06:38.275 CXX test/cpp_headers/scheduler.o 00:06:38.275 CXX test/cpp_headers/scsi.o 00:06:38.275 CXX test/cpp_headers/scsi_spec.o 00:06:38.275 CXX test/cpp_headers/stdinc.o 00:06:38.275 CXX test/cpp_headers/sock.o 00:06:38.275 CC test/event/scheduler/scheduler.o 00:06:38.275 CXX test/cpp_headers/string.o 00:06:38.275 CXX test/cpp_headers/thread.o 00:06:38.275 CXX test/cpp_headers/trace.o 00:06:38.275 CXX test/cpp_headers/trace_parser.o 00:06:38.535 CXX test/cpp_headers/tree.o 00:06:38.535 CXX test/cpp_headers/ublk.o 00:06:38.535 LINK lsvmd 00:06:38.535 LINK led 00:06:38.535 CXX test/cpp_headers/util.o 00:06:38.535 CXX test/cpp_headers/uuid.o 00:06:38.535 CXX test/cpp_headers/version.o 00:06:38.535 CXX test/cpp_headers/vfio_user_pci.o 00:06:38.535 CXX test/cpp_headers/vfio_user_spec.o 00:06:38.535 CXX test/cpp_headers/vhost.o 00:06:38.535 CXX test/cpp_headers/vmd.o 00:06:38.535 CXX test/cpp_headers/xor.o 00:06:38.535 LINK spdk_nvme_perf 00:06:38.535 CXX test/cpp_headers/zipf.o 00:06:38.535 CC app/vhost/vhost.o 00:06:38.535 LINK vhost_fuzz 00:06:38.535 LINK reactor 00:06:38.535 LINK event_perf 00:06:38.535 LINK mem_callbacks 00:06:38.535 LINK spdk_nvme_identify 00:06:38.535 LINK reactor_perf 00:06:38.535 LINK hello_sock 00:06:38.800 LINK app_repeat 00:06:38.800 LINK thread 00:06:38.800 LINK spdk_top 00:06:38.800 LINK idxd_perf 00:06:38.800 CC test/nvme/reset/reset.o 00:06:38.800 CC test/nvme/e2edp/nvme_dp.o 00:06:38.800 CC test/nvme/overhead/overhead.o 00:06:38.800 CC test/nvme/sgl/sgl.o 00:06:38.800 CC test/nvme/aer/aer.o 00:06:38.800 LINK scheduler 00:06:38.800 CC test/nvme/err_injection/err_injection.o 00:06:38.800 CC test/nvme/reserve/reserve.o 00:06:38.800 LINK vhost 00:06:38.800 CC test/nvme/startup/startup.o 00:06:38.800 CC test/nvme/simple_copy/simple_copy.o 00:06:38.800 CC test/nvme/connect_stress/connect_stress.o 00:06:38.800 CC test/nvme/compliance/nvme_compliance.o 00:06:38.800 CC 
test/nvme/boot_partition/boot_partition.o 00:06:38.800 CC test/blobfs/mkfs/mkfs.o 00:06:39.059 CC test/nvme/fused_ordering/fused_ordering.o 00:06:39.059 CC test/nvme/cuse/cuse.o 00:06:39.059 CC test/nvme/doorbell_aers/doorbell_aers.o 00:06:39.059 CC test/nvme/fdp/fdp.o 00:06:39.059 CC test/accel/dif/dif.o 00:06:39.059 CC test/lvol/esnap/esnap.o 00:06:39.059 CC examples/nvme/abort/abort.o 00:06:39.059 CC examples/nvme/reconnect/reconnect.o 00:06:39.059 LINK boot_partition 00:06:39.059 CC examples/nvme/nvme_manage/nvme_manage.o 00:06:39.059 CC examples/nvme/arbitration/arbitration.o 00:06:39.059 CC examples/nvme/hello_world/hello_world.o 00:06:39.059 CC examples/nvme/hotplug/hotplug.o 00:06:39.059 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:06:39.059 CC examples/nvme/cmb_copy/cmb_copy.o 00:06:39.059 LINK startup 00:06:39.059 LINK connect_stress 00:06:39.059 LINK err_injection 00:06:39.317 LINK reserve 00:06:39.317 LINK doorbell_aers 00:06:39.317 LINK fused_ordering 00:06:39.317 LINK simple_copy 00:06:39.317 LINK memory_ut 00:06:39.317 LINK reset 00:06:39.317 CC examples/accel/perf/accel_perf.o 00:06:39.317 LINK sgl 00:06:39.317 LINK aer 00:06:39.317 LINK mkfs 00:06:39.317 CC examples/fsdev/hello_world/hello_fsdev.o 00:06:39.317 CC examples/blob/cli/blobcli.o 00:06:39.317 LINK nvme_compliance 00:06:39.317 CC examples/blob/hello_world/hello_blob.o 00:06:39.317 LINK overhead 00:06:39.317 LINK nvme_dp 00:06:39.575 LINK hello_world 00:06:39.575 LINK cmb_copy 00:06:39.575 LINK pmr_persistence 00:06:39.575 LINK hotplug 00:06:39.575 LINK fdp 00:06:39.575 LINK abort 00:06:39.575 LINK reconnect 00:06:39.575 LINK arbitration 00:06:39.575 LINK hello_blob 00:06:39.833 LINK nvme_manage 00:06:39.833 LINK hello_fsdev 00:06:39.833 LINK accel_perf 00:06:39.833 LINK dif 00:06:39.833 LINK blobcli 00:06:40.091 LINK iscsi_fuzz 00:06:40.349 CC examples/bdev/hello_world/hello_bdev.o 00:06:40.349 CC examples/bdev/bdevperf/bdevperf.o 00:06:40.349 CC test/bdev/bdevio/bdevio.o 00:06:40.608 LINK hello_bdev 00:06:40.608 LINK cuse 00:06:40.608 LINK bdevio 00:06:41.175 LINK bdevperf 00:06:41.433 CC examples/nvmf/nvmf/nvmf.o 00:06:41.691 LINK nvmf 00:06:44.229 LINK esnap 00:06:44.487 00:06:44.487 real 1m9.424s 00:06:44.487 user 11m52.693s 00:06:44.487 sys 2m38.927s 00:06:44.487 06:17:16 make -- common/autotest_common.sh@1128 -- $ xtrace_disable 00:06:44.487 06:17:16 make -- common/autotest_common.sh@10 -- $ set +x 00:06:44.487 ************************************ 00:06:44.487 END TEST make 00:06:44.487 ************************************ 00:06:44.487 06:17:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:06:44.487 06:17:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:06:44.487 06:17:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:06:44.487 06:17:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.487 06:17:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:06:44.487 06:17:16 -- pm/common@44 -- $ pid=1893180 00:06:44.487 06:17:16 -- pm/common@50 -- $ kill -TERM 1893180 00:06:44.487 06:17:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.487 06:17:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:06:44.487 06:17:16 -- pm/common@44 -- $ pid=1893182 00:06:44.487 06:17:16 -- pm/common@50 -- $ kill -TERM 1893182 00:06:44.487 06:17:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 
00:06:44.487 06:17:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:06:44.487 06:17:16 -- pm/common@44 -- $ pid=1893183 00:06:44.487 06:17:16 -- pm/common@50 -- $ kill -TERM 1893183 00:06:44.487 06:17:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.487 06:17:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:06:44.487 06:17:16 -- pm/common@44 -- $ pid=1893213 00:06:44.487 06:17:16 -- pm/common@50 -- $ sudo -E kill -TERM 1893213 00:06:44.487 06:17:16 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:06:44.487 06:17:16 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:06:44.487 06:17:16 -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:06:44.487 06:17:16 -- common/autotest_common.sh@1691 -- # lcov --version 00:06:44.487 06:17:16 -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:06:44.746 06:17:16 -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:06:44.746 06:17:16 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:44.746 06:17:16 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:44.746 06:17:16 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:44.746 06:17:16 -- scripts/common.sh@336 -- # IFS=.-: 00:06:44.746 06:17:16 -- scripts/common.sh@336 -- # read -ra ver1 00:06:44.746 06:17:16 -- scripts/common.sh@337 -- # IFS=.-: 00:06:44.746 06:17:16 -- scripts/common.sh@337 -- # read -ra ver2 00:06:44.746 06:17:16 -- scripts/common.sh@338 -- # local 'op=<' 00:06:44.746 06:17:16 -- scripts/common.sh@340 -- # ver1_l=2 00:06:44.746 06:17:16 -- scripts/common.sh@341 -- # ver2_l=1 00:06:44.746 06:17:16 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:44.746 06:17:16 -- scripts/common.sh@344 -- # case "$op" in 00:06:44.746 06:17:16 -- scripts/common.sh@345 -- # : 1 00:06:44.746 06:17:16 -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:44.746 06:17:16 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:44.746 06:17:16 -- scripts/common.sh@365 -- # decimal 1 00:06:44.746 06:17:16 -- scripts/common.sh@353 -- # local d=1 00:06:44.746 06:17:16 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:44.746 06:17:16 -- scripts/common.sh@355 -- # echo 1 00:06:44.746 06:17:16 -- scripts/common.sh@365 -- # ver1[v]=1 00:06:44.746 06:17:16 -- scripts/common.sh@366 -- # decimal 2 00:06:44.746 06:17:16 -- scripts/common.sh@353 -- # local d=2 00:06:44.746 06:17:16 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:44.746 06:17:16 -- scripts/common.sh@355 -- # echo 2 00:06:44.746 06:17:16 -- scripts/common.sh@366 -- # ver2[v]=2 00:06:44.746 06:17:16 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:44.746 06:17:16 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:44.746 06:17:16 -- scripts/common.sh@368 -- # return 0 00:06:44.746 06:17:16 -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:44.746 06:17:16 -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:06:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.746 --rc genhtml_branch_coverage=1 00:06:44.746 --rc genhtml_function_coverage=1 00:06:44.746 --rc genhtml_legend=1 00:06:44.746 --rc geninfo_all_blocks=1 00:06:44.746 --rc geninfo_unexecuted_blocks=1 00:06:44.746 00:06:44.746 ' 00:06:44.746 06:17:16 -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:06:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.746 --rc genhtml_branch_coverage=1 00:06:44.746 --rc genhtml_function_coverage=1 00:06:44.746 --rc genhtml_legend=1 00:06:44.746 --rc geninfo_all_blocks=1 00:06:44.746 --rc geninfo_unexecuted_blocks=1 00:06:44.746 00:06:44.746 ' 00:06:44.746 06:17:16 -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:06:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.746 --rc genhtml_branch_coverage=1 00:06:44.746 --rc genhtml_function_coverage=1 00:06:44.746 --rc genhtml_legend=1 00:06:44.746 --rc geninfo_all_blocks=1 00:06:44.746 --rc geninfo_unexecuted_blocks=1 00:06:44.746 00:06:44.746 ' 00:06:44.746 06:17:16 -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:06:44.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:44.746 --rc genhtml_branch_coverage=1 00:06:44.746 --rc genhtml_function_coverage=1 00:06:44.746 --rc genhtml_legend=1 00:06:44.746 --rc geninfo_all_blocks=1 00:06:44.746 --rc geninfo_unexecuted_blocks=1 00:06:44.746 00:06:44.746 ' 00:06:44.746 06:17:16 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:44.746 06:17:16 -- nvmf/common.sh@7 -- # uname -s 00:06:44.746 06:17:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:44.746 06:17:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:44.746 06:17:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:44.746 06:17:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:44.746 06:17:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:44.746 06:17:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:44.746 06:17:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:44.747 06:17:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:44.747 06:17:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:44.747 06:17:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:44.747 06:17:16 -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:06:44.747 06:17:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:06:44.747 06:17:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:44.747 06:17:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:44.747 06:17:16 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:44.747 06:17:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:44.747 06:17:16 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:44.747 06:17:16 -- scripts/common.sh@15 -- # shopt -s extglob 00:06:44.747 06:17:16 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:44.747 06:17:16 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:44.747 06:17:16 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:44.747 06:17:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.747 06:17:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.747 06:17:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.747 06:17:16 -- paths/export.sh@5 -- # export PATH 00:06:44.747 06:17:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:44.747 06:17:16 -- nvmf/common.sh@51 -- # : 0 00:06:44.747 06:17:16 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:44.747 06:17:16 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:44.747 06:17:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:44.747 06:17:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:44.747 06:17:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:44.747 06:17:16 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:44.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:44.747 06:17:16 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:44.747 06:17:16 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:44.747 06:17:16 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:44.747 06:17:16 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:06:44.747 06:17:16 -- spdk/autotest.sh@32 -- # uname -s 00:06:44.747 06:17:16 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:06:44.747 06:17:16 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:06:44.747 06:17:16 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 
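A note on the lt/cmp_versions trace earlier in this block: scripts/common.sh decides that the installed lcov 1.15 predates 2.x by splitting both version strings on IFS=.-: and comparing the fields numerically, then picks the pre-2.0 lcov options. A minimal standalone sketch of that field-wise compare (helper name and structure are illustrative, not SPDK's actual code):

#!/usr/bin/env bash
# Field-wise dotted-version compare in the spirit of the cmp_versions trace above.
# Assumes purely numeric fields (e.g. 1.15 vs 2); suffixes like -pre need extra handling.
version_lt() {                      # returns 0 (true) when $1 < $2
    local -a a b
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        local x=${a[i]:-0} y=${b[i]:-0}
        (( x < y )) && return 0
        (( x > y )) && return 1
    done
    return 1                        # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2, use the pre-2.0 lcov options"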
00:06:44.747 06:17:16 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:06:44.747 06:17:16 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:06:44.747 06:17:16 -- spdk/autotest.sh@44 -- # modprobe nbd 00:06:44.747 06:17:16 -- spdk/autotest.sh@46 -- # type -P udevadm 00:06:44.747 06:17:16 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:06:44.747 06:17:16 -- spdk/autotest.sh@48 -- # udevadm_pid=1952599 00:06:44.747 06:17:16 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:06:44.747 06:17:16 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:06:44.747 06:17:16 -- pm/common@17 -- # local monitor 00:06:44.747 06:17:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.747 06:17:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.747 06:17:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.747 06:17:16 -- pm/common@21 -- # date +%s 00:06:44.747 06:17:16 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:06:44.747 06:17:16 -- pm/common@21 -- # date +%s 00:06:44.747 06:17:16 -- pm/common@25 -- # sleep 1 00:06:44.747 06:17:16 -- pm/common@21 -- # date +%s 00:06:44.747 06:17:16 -- pm/common@21 -- # date +%s 00:06:44.747 06:17:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079836 00:06:44.747 06:17:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079836 00:06:44.747 06:17:16 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079836 00:06:44.747 06:17:16 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732079836 00:06:44.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079836_collect-vmstat.pm.log 00:06:44.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079836_collect-cpu-load.pm.log 00:06:44.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079836_collect-cpu-temp.pm.log 00:06:44.747 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732079836_collect-bmc-pm.bmc.pm.log 00:06:45.684 06:17:17 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:06:45.684 06:17:17 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:06:45.684 06:17:17 -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:45.684 06:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.684 06:17:17 -- spdk/autotest.sh@59 -- # create_test_list 00:06:45.684 06:17:17 -- common/autotest_common.sh@750 -- # xtrace_disable 00:06:45.684 06:17:17 -- common/autotest_common.sh@10 -- # set +x 00:06:45.684 06:17:17 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:06:45.684 06:17:17 
-- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:45.684 06:17:17 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:45.684 06:17:17 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:06:45.684 06:17:17 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:45.684 06:17:17 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:06:45.684 06:17:17 -- common/autotest_common.sh@1455 -- # uname 00:06:45.684 06:17:17 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:06:45.684 06:17:17 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:06:45.684 06:17:17 -- common/autotest_common.sh@1475 -- # uname 00:06:45.684 06:17:17 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:06:45.684 06:17:17 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:06:45.684 06:17:17 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:06:45.942 lcov: LCOV version 1.15 00:06:45.942 06:17:17 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:07:04.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:07:04.113 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:07:26.033 06:17:55 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:07:26.034 06:17:55 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:26.034 06:17:55 -- common/autotest_common.sh@10 -- # set +x 00:07:26.034 06:17:55 -- spdk/autotest.sh@78 -- # rm -f 00:07:26.034 06:17:55 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:26.034 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:07:26.034 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:07:26.034 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:07:26.034 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:07:26.034 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:07:26.034 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:07:26.034 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:07:26.034 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:07:26.034 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:07:26.034 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:07:26.034 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:07:26.034 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:07:26.034 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:07:26.034 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:07:26.034 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:07:26.034 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:07:26.034 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:07:26.034 06:17:56 -- 
spdk/autotest.sh@83 -- # get_zoned_devs 00:07:26.034 06:17:56 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:07:26.034 06:17:56 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:07:26.034 06:17:56 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:07:26.034 06:17:56 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:07:26.034 06:17:56 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:07:26.034 06:17:56 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:07:26.034 06:17:56 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:07:26.034 06:17:56 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:07:26.034 06:17:56 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:07:26.034 06:17:56 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:07:26.034 06:17:56 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:07:26.034 06:17:56 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:07:26.034 06:17:56 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:07:26.034 06:17:56 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:07:26.034 No valid GPT data, bailing 00:07:26.034 06:17:56 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:07:26.034 06:17:56 -- scripts/common.sh@394 -- # pt= 00:07:26.034 06:17:56 -- scripts/common.sh@395 -- # return 1 00:07:26.034 06:17:56 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:07:26.034 1+0 records in 00:07:26.034 1+0 records out 00:07:26.034 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00230546 s, 455 MB/s 00:07:26.034 06:17:56 -- spdk/autotest.sh@105 -- # sync 00:07:26.034 06:17:56 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:07:26.034 06:17:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:07:26.034 06:17:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:07:27.409 06:17:59 -- spdk/autotest.sh@111 -- # uname -s 00:07:27.409 06:17:59 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:07:27.409 06:17:59 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:07:27.409 06:17:59 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:07:28.783 Hugepages 00:07:28.783 node hugesize free / total 00:07:28.783 node0 1048576kB 0 / 0 00:07:28.783 node0 2048kB 0 / 0 00:07:28.783 node1 1048576kB 0 / 0 00:07:28.783 node1 2048kB 0 / 0 00:07:28.783 00:07:28.783 Type BDF Vendor Device NUMA Driver Device Block devices 00:07:28.783 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:07:28.783 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:07:28.783 NVMe 0000:0b:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:07:28.783 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:07:28.783 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:07:28.783 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:07:28.783 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:07:28.783 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:07:28.783 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:07:28.784 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:07:28.784 I/OAT 0000:80:04.7 8086 
0e27 1 ioatdma - - 00:07:28.784 06:18:00 -- spdk/autotest.sh@117 -- # uname -s 00:07:28.784 06:18:00 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:07:28.784 06:18:00 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:07:28.784 06:18:00 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:29.720 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:29.720 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:29.979 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:29.979 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:29.979 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:29.979 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:29.979 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:29.979 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:29.979 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:30.917 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:07:31.177 06:18:02 -- common/autotest_common.sh@1515 -- # sleep 1 00:07:32.116 06:18:03 -- common/autotest_common.sh@1516 -- # bdfs=() 00:07:32.116 06:18:03 -- common/autotest_common.sh@1516 -- # local bdfs 00:07:32.116 06:18:03 -- common/autotest_common.sh@1518 -- # bdfs=($(get_nvme_bdfs)) 00:07:32.116 06:18:03 -- common/autotest_common.sh@1518 -- # get_nvme_bdfs 00:07:32.116 06:18:03 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:32.116 06:18:03 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:32.116 06:18:03 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:32.116 06:18:03 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:32.116 06:18:03 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:32.116 06:18:03 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:32.116 06:18:03 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:07:32.116 06:18:03 -- common/autotest_common.sh@1520 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:07:33.494 Waiting for block devices as requested 00:07:33.494 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:33.494 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:33.494 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:33.494 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:33.754 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:33.754 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:33.754 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:33.754 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:07:34.014 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:07:34.014 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:07:34.273 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:07:34.273 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:07:34.273 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:07:34.273 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:07:34.533 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:07:34.533 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:07:34.533 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:07:34.792 06:18:06 -- common/autotest_common.sh@1522 -- # for bdf in "${bdfs[@]}" 00:07:34.792 06:18:06 -- common/autotest_common.sh@1523 -- # get_nvme_ctrlr_from_bdf 0000:0b:00.0 00:07:34.792 06:18:06 -- common/autotest_common.sh@1485 -- # readlink -f /sys/class/nvme/nvme0 00:07:34.792 06:18:06 -- common/autotest_common.sh@1485 -- # grep 0000:0b:00.0/nvme/nvme 00:07:34.792 06:18:06 -- common/autotest_common.sh@1485 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:07:34.792 06:18:06 -- common/autotest_common.sh@1486 -- # [[ -z /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 ]] 00:07:34.792 06:18:06 -- common/autotest_common.sh@1490 -- # basename /sys/devices/pci0000:00/0000:00:03.2/0000:0b:00.0/nvme/nvme0 00:07:34.792 06:18:06 -- common/autotest_common.sh@1490 -- # printf '%s\n' nvme0 00:07:34.792 06:18:06 -- common/autotest_common.sh@1523 -- # nvme_ctrlr=/dev/nvme0 00:07:34.793 06:18:06 -- common/autotest_common.sh@1524 -- # [[ -z /dev/nvme0 ]] 00:07:34.793 06:18:06 -- common/autotest_common.sh@1529 -- # nvme id-ctrl /dev/nvme0 00:07:34.793 06:18:06 -- common/autotest_common.sh@1529 -- # grep oacs 00:07:34.793 06:18:06 -- common/autotest_common.sh@1529 -- # cut -d: -f2 00:07:34.793 06:18:06 -- common/autotest_common.sh@1529 -- # oacs=' 0xf' 00:07:34.793 06:18:06 -- common/autotest_common.sh@1530 -- # oacs_ns_manage=8 00:07:34.793 06:18:06 -- common/autotest_common.sh@1532 -- # [[ 8 -ne 0 ]] 00:07:34.793 06:18:06 -- common/autotest_common.sh@1538 -- # nvme id-ctrl /dev/nvme0 00:07:34.793 06:18:06 -- common/autotest_common.sh@1538 -- # grep unvmcap 00:07:34.793 06:18:06 -- common/autotest_common.sh@1538 -- # cut -d: -f2 00:07:34.793 06:18:06 -- common/autotest_common.sh@1538 -- # unvmcap=' 0' 00:07:34.793 06:18:06 -- common/autotest_common.sh@1539 -- # [[ 0 -eq 0 ]] 00:07:34.793 06:18:06 -- common/autotest_common.sh@1541 -- # continue 00:07:34.793 06:18:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:07:34.793 06:18:06 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:34.793 06:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:34.793 06:18:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:07:34.793 06:18:06 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:34.793 06:18:06 -- common/autotest_common.sh@10 -- # set +x 00:07:34.793 06:18:06 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:07:36.171 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:36.171 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:07:36.171 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:07:37.108 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:07:37.367 06:18:08 -- spdk/autotest.sh@127 -- # timing_exit 
afterboot 00:07:37.367 06:18:08 -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:37.367 06:18:08 -- common/autotest_common.sh@10 -- # set +x 00:07:37.367 06:18:08 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:07:37.367 06:18:08 -- common/autotest_common.sh@1576 -- # mapfile -t bdfs 00:07:37.367 06:18:08 -- common/autotest_common.sh@1576 -- # get_nvme_bdfs_by_id 0x0a54 00:07:37.367 06:18:08 -- common/autotest_common.sh@1561 -- # bdfs=() 00:07:37.367 06:18:08 -- common/autotest_common.sh@1561 -- # _bdfs=() 00:07:37.367 06:18:08 -- common/autotest_common.sh@1561 -- # local bdfs _bdfs 00:07:37.367 06:18:08 -- common/autotest_common.sh@1562 -- # _bdfs=($(get_nvme_bdfs)) 00:07:37.367 06:18:08 -- common/autotest_common.sh@1562 -- # get_nvme_bdfs 00:07:37.367 06:18:08 -- common/autotest_common.sh@1496 -- # bdfs=() 00:07:37.367 06:18:08 -- common/autotest_common.sh@1496 -- # local bdfs 00:07:37.367 06:18:08 -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:37.367 06:18:08 -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:07:37.367 06:18:08 -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:07:37.367 06:18:09 -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:07:37.367 06:18:09 -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:07:37.367 06:18:09 -- common/autotest_common.sh@1563 -- # for bdf in "${_bdfs[@]}" 00:07:37.367 06:18:09 -- common/autotest_common.sh@1564 -- # cat /sys/bus/pci/devices/0000:0b:00.0/device 00:07:37.367 06:18:09 -- common/autotest_common.sh@1564 -- # device=0x0a54 00:07:37.367 06:18:09 -- common/autotest_common.sh@1565 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:07:37.367 06:18:09 -- common/autotest_common.sh@1566 -- # bdfs+=($bdf) 00:07:37.367 06:18:09 -- common/autotest_common.sh@1570 -- # (( 1 > 0 )) 00:07:37.367 06:18:09 -- common/autotest_common.sh@1571 -- # printf '%s\n' 0000:0b:00.0 00:07:37.367 06:18:09 -- common/autotest_common.sh@1577 -- # [[ -z 0000:0b:00.0 ]] 00:07:37.367 06:18:09 -- common/autotest_common.sh@1582 -- # spdk_tgt_pid=1963709 00:07:37.367 06:18:09 -- common/autotest_common.sh@1581 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:37.367 06:18:09 -- common/autotest_common.sh@1583 -- # waitforlisten 1963709 00:07:37.367 06:18:09 -- common/autotest_common.sh@833 -- # '[' -z 1963709 ']' 00:07:37.367 06:18:09 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:37.367 06:18:09 -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:37.367 06:18:09 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:37.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:37.367 06:18:09 -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:37.367 06:18:09 -- common/autotest_common.sh@10 -- # set +x 00:07:37.367 [2024-11-20 06:18:09.115649] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
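The opal_revert_cleanup trace above builds its controller list by piping scripts/gen_nvme.sh through jq -r '.config[].params.traddr' and then keeping only BDFs whose PCI device ID reads back as 0x0a54. The same filter can be reproduced straight from sysfs; a rough sketch (bypassing gen_nvme.sh, using the sysfs layout visible in this run):

#!/usr/bin/env bash
# List NVMe controllers and keep those whose PCI device ID is 0x0a54,
# mirroring the cat /sys/bus/pci/devices/<bdf>/device check in the trace.
target_id=0x0a54
for ctrl in /sys/class/nvme/nvme*; do
    [[ -e $ctrl ]] || continue
    bdf=$(basename "$(readlink -f "$ctrl/device")")     # e.g. 0000:0b:00.0
    dev_id=$(<"/sys/bus/pci/devices/$bdf/device")
    [[ $dev_id == "$target_id" ]] && echo "$bdf"
done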
00:07:37.367 [2024-11-20 06:18:09.115748] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1963709 ] 00:07:37.367 [2024-11-20 06:18:09.182297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:37.625 [2024-11-20 06:18:09.242258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:37.882 06:18:09 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:37.883 06:18:09 -- common/autotest_common.sh@866 -- # return 0 00:07:37.883 06:18:09 -- common/autotest_common.sh@1585 -- # bdf_id=0 00:07:37.883 06:18:09 -- common/autotest_common.sh@1586 -- # for bdf in "${bdfs[@]}" 00:07:37.883 06:18:09 -- common/autotest_common.sh@1587 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:0b:00.0 00:07:41.166 nvme0n1 00:07:41.166 06:18:12 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:07:41.166 [2024-11-20 06:18:12.867000] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:07:41.166 [2024-11-20 06:18:12.867042] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:07:41.166 request: 00:07:41.166 { 00:07:41.166 "nvme_ctrlr_name": "nvme0", 00:07:41.166 "password": "test", 00:07:41.166 "method": "bdev_nvme_opal_revert", 00:07:41.166 "req_id": 1 00:07:41.166 } 00:07:41.166 Got JSON-RPC error response 00:07:41.166 response: 00:07:41.166 { 00:07:41.166 "code": -32603, 00:07:41.166 "message": "Internal error" 00:07:41.166 } 00:07:41.166 06:18:12 -- common/autotest_common.sh@1589 -- # true 00:07:41.166 06:18:12 -- common/autotest_common.sh@1590 -- # (( ++bdf_id )) 00:07:41.166 06:18:12 -- common/autotest_common.sh@1593 -- # killprocess 1963709 00:07:41.166 06:18:12 -- common/autotest_common.sh@952 -- # '[' -z 1963709 ']' 00:07:41.166 06:18:12 -- common/autotest_common.sh@956 -- # kill -0 1963709 00:07:41.166 06:18:12 -- common/autotest_common.sh@957 -- # uname 00:07:41.166 06:18:12 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:41.166 06:18:12 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1963709 00:07:41.166 06:18:12 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:41.166 06:18:12 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:41.166 06:18:12 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1963709' 00:07:41.166 killing process with pid 1963709 00:07:41.166 06:18:12 -- common/autotest_common.sh@971 -- # kill 1963709 00:07:41.166 06:18:12 -- common/autotest_common.sh@976 -- # wait 1963709 00:07:43.066 06:18:14 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:07:43.066 06:18:14 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:07:43.066 06:18:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:43.066 06:18:14 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:07:43.066 06:18:14 -- spdk/autotest.sh@149 -- # timing_enter lib 00:07:43.066 06:18:14 -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.066 06:18:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.066 06:18:14 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:07:43.066 06:18:14 -- spdk/autotest.sh@155 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:43.066 06:18:14 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.066 06:18:14 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.066 06:18:14 -- common/autotest_common.sh@10 -- # set +x 00:07:43.066 ************************************ 00:07:43.066 START TEST env 00:07:43.066 ************************************ 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:07:43.066 * Looking for test storage... 00:07:43.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1691 -- # lcov --version 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:43.066 06:18:14 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.066 06:18:14 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.066 06:18:14 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.066 06:18:14 env -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.066 06:18:14 env -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.066 06:18:14 env -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.066 06:18:14 env -- scripts/common.sh@337 -- # read -ra ver2 00:07:43.066 06:18:14 env -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.066 06:18:14 env -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.066 06:18:14 env -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.066 06:18:14 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.066 06:18:14 env -- scripts/common.sh@344 -- # case "$op" in 00:07:43.066 06:18:14 env -- scripts/common.sh@345 -- # : 1 00:07:43.066 06:18:14 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.066 06:18:14 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:43.066 06:18:14 env -- scripts/common.sh@365 -- # decimal 1 00:07:43.066 06:18:14 env -- scripts/common.sh@353 -- # local d=1 00:07:43.066 06:18:14 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.066 06:18:14 env -- scripts/common.sh@355 -- # echo 1 00:07:43.066 06:18:14 env -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.066 06:18:14 env -- scripts/common.sh@366 -- # decimal 2 00:07:43.066 06:18:14 env -- scripts/common.sh@353 -- # local d=2 00:07:43.066 06:18:14 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.066 06:18:14 env -- scripts/common.sh@355 -- # echo 2 00:07:43.066 06:18:14 env -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.066 06:18:14 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.066 06:18:14 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.066 06:18:14 env -- scripts/common.sh@368 -- # return 0 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:43.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.066 --rc genhtml_branch_coverage=1 00:07:43.066 --rc genhtml_function_coverage=1 00:07:43.066 --rc genhtml_legend=1 00:07:43.066 --rc geninfo_all_blocks=1 00:07:43.066 --rc geninfo_unexecuted_blocks=1 00:07:43.066 00:07:43.066 ' 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:43.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.066 --rc genhtml_branch_coverage=1 00:07:43.066 --rc genhtml_function_coverage=1 00:07:43.066 --rc genhtml_legend=1 00:07:43.066 --rc geninfo_all_blocks=1 00:07:43.066 --rc geninfo_unexecuted_blocks=1 00:07:43.066 00:07:43.066 ' 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:43.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.066 --rc genhtml_branch_coverage=1 00:07:43.066 --rc genhtml_function_coverage=1 00:07:43.066 --rc genhtml_legend=1 00:07:43.066 --rc geninfo_all_blocks=1 00:07:43.066 --rc geninfo_unexecuted_blocks=1 00:07:43.066 00:07:43.066 ' 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:43.066 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.066 --rc genhtml_branch_coverage=1 00:07:43.066 --rc genhtml_function_coverage=1 00:07:43.066 --rc genhtml_legend=1 00:07:43.066 --rc geninfo_all_blocks=1 00:07:43.066 --rc geninfo_unexecuted_blocks=1 00:07:43.066 00:07:43.066 ' 00:07:43.066 06:18:14 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.066 06:18:14 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.066 06:18:14 env -- common/autotest_common.sh@10 -- # set +x 00:07:43.066 ************************************ 00:07:43.066 START TEST env_memory 00:07:43.066 ************************************ 00:07:43.066 06:18:14 env.env_memory -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:07:43.067 00:07:43.067 00:07:43.067 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.067 http://cunit.sourceforge.net/ 00:07:43.067 00:07:43.067 00:07:43.067 Suite: memory 00:07:43.067 Test: alloc and free memory map ...[2024-11-20 06:18:14.898846] 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:07:43.325 passed 00:07:43.325 Test: mem map translation ...[2024-11-20 06:18:14.919586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:07:43.325 [2024-11-20 06:18:14.919634] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:07:43.325 [2024-11-20 06:18:14.919682] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:07:43.325 [2024-11-20 06:18:14.919694] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:07:43.325 passed 00:07:43.325 Test: mem map registration ...[2024-11-20 06:18:14.962361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:07:43.325 [2024-11-20 06:18:14.962382] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:07:43.325 passed 00:07:43.325 Test: mem map adjacent registrations ...passed 00:07:43.325 00:07:43.325 Run Summary: Type Total Ran Passed Failed Inactive 00:07:43.325 suites 1 1 n/a 0 0 00:07:43.325 tests 4 4 4 0 0 00:07:43.326 asserts 152 152 152 0 n/a 00:07:43.326 00:07:43.326 Elapsed time = 0.145 seconds 00:07:43.326 00:07:43.326 real 0m0.154s 00:07:43.326 user 0m0.146s 00:07:43.326 sys 0m0.008s 00:07:43.326 06:18:15 env.env_memory -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:43.326 06:18:15 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:07:43.326 ************************************ 00:07:43.326 END TEST env_memory 00:07:43.326 ************************************ 00:07:43.326 06:18:15 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:43.326 06:18:15 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:43.326 06:18:15 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:43.326 06:18:15 env -- common/autotest_common.sh@10 -- # set +x 00:07:43.326 ************************************ 00:07:43.326 START TEST env_vtophys 00:07:43.326 ************************************ 00:07:43.326 06:18:15 env.env_vtophys -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:07:43.326 EAL: lib.eal log level changed from notice to debug 00:07:43.326 EAL: Detected lcore 0 as core 0 on socket 0 00:07:43.326 EAL: Detected lcore 1 as core 1 on socket 0 00:07:43.326 EAL: Detected lcore 2 as core 2 on socket 0 00:07:43.326 EAL: Detected lcore 3 as core 3 on socket 0 00:07:43.326 EAL: Detected lcore 4 as core 4 on socket 0 00:07:43.326 EAL: Detected lcore 5 as core 5 on socket 0 00:07:43.326 EAL: Detected lcore 6 as core 8 on socket 0 00:07:43.326 EAL: Detected lcore 7 as core 9 on socket 0 00:07:43.326 EAL: Detected lcore 8 as core 10 on socket 0 00:07:43.326 EAL: Detected lcore 9 as core 11 on socket 0 00:07:43.326 EAL: Detected lcore 10 
as core 12 on socket 0 00:07:43.326 EAL: Detected lcore 11 as core 13 on socket 0 00:07:43.326 EAL: Detected lcore 12 as core 0 on socket 1 00:07:43.326 EAL: Detected lcore 13 as core 1 on socket 1 00:07:43.326 EAL: Detected lcore 14 as core 2 on socket 1 00:07:43.326 EAL: Detected lcore 15 as core 3 on socket 1 00:07:43.326 EAL: Detected lcore 16 as core 4 on socket 1 00:07:43.326 EAL: Detected lcore 17 as core 5 on socket 1 00:07:43.326 EAL: Detected lcore 18 as core 8 on socket 1 00:07:43.326 EAL: Detected lcore 19 as core 9 on socket 1 00:07:43.326 EAL: Detected lcore 20 as core 10 on socket 1 00:07:43.326 EAL: Detected lcore 21 as core 11 on socket 1 00:07:43.326 EAL: Detected lcore 22 as core 12 on socket 1 00:07:43.326 EAL: Detected lcore 23 as core 13 on socket 1 00:07:43.326 EAL: Detected lcore 24 as core 0 on socket 0 00:07:43.326 EAL: Detected lcore 25 as core 1 on socket 0 00:07:43.326 EAL: Detected lcore 26 as core 2 on socket 0 00:07:43.326 EAL: Detected lcore 27 as core 3 on socket 0 00:07:43.326 EAL: Detected lcore 28 as core 4 on socket 0 00:07:43.326 EAL: Detected lcore 29 as core 5 on socket 0 00:07:43.326 EAL: Detected lcore 30 as core 8 on socket 0 00:07:43.326 EAL: Detected lcore 31 as core 9 on socket 0 00:07:43.326 EAL: Detected lcore 32 as core 10 on socket 0 00:07:43.326 EAL: Detected lcore 33 as core 11 on socket 0 00:07:43.326 EAL: Detected lcore 34 as core 12 on socket 0 00:07:43.326 EAL: Detected lcore 35 as core 13 on socket 0 00:07:43.326 EAL: Detected lcore 36 as core 0 on socket 1 00:07:43.326 EAL: Detected lcore 37 as core 1 on socket 1 00:07:43.326 EAL: Detected lcore 38 as core 2 on socket 1 00:07:43.326 EAL: Detected lcore 39 as core 3 on socket 1 00:07:43.326 EAL: Detected lcore 40 as core 4 on socket 1 00:07:43.326 EAL: Detected lcore 41 as core 5 on socket 1 00:07:43.326 EAL: Detected lcore 42 as core 8 on socket 1 00:07:43.326 EAL: Detected lcore 43 as core 9 on socket 1 00:07:43.326 EAL: Detected lcore 44 as core 10 on socket 1 00:07:43.326 EAL: Detected lcore 45 as core 11 on socket 1 00:07:43.326 EAL: Detected lcore 46 as core 12 on socket 1 00:07:43.326 EAL: Detected lcore 47 as core 13 on socket 1 00:07:43.326 EAL: Maximum logical cores by configuration: 128 00:07:43.326 EAL: Detected CPU lcores: 48 00:07:43.326 EAL: Detected NUMA nodes: 2 00:07:43.326 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:07:43.326 EAL: Detected shared linkage of DPDK 00:07:43.326 EAL: No shared files mode enabled, IPC will be disabled 00:07:43.326 EAL: Bus pci wants IOVA as 'DC' 00:07:43.326 EAL: Buses did not request a specific IOVA mode. 00:07:43.326 EAL: IOMMU is available, selecting IOVA as VA mode. 00:07:43.326 EAL: Selected IOVA mode 'VA' 00:07:43.326 EAL: Probing VFIO support... 00:07:43.326 EAL: IOMMU type 1 (Type 1) is supported 00:07:43.326 EAL: IOMMU type 7 (sPAPR) is not supported 00:07:43.326 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:07:43.326 EAL: VFIO support initialized 00:07:43.326 EAL: Ask a virtual area of 0x2e000 bytes 00:07:43.326 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:07:43.326 EAL: Setting up physically contiguous memory... 
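EAL only prints "IOMMU type 1 (Type 1) is supported" and "VFIO support initialized" above because the host has the vfio modules loaded and IOMMU groups populated (setup.sh bound the devices to vfio-pci earlier in the log). A quick host-side sanity check for that precondition:

# Check that VFIO and the IOMMU are usable before running DPDK/SPDK with IOVA mode 'VA'.
lsmod | grep -E '^vfio' || echo "vfio modules not loaded"
ls /sys/kernel/iommu_groups 2>/dev/null | wc -l   # non-zero when the IOMMU is active
dmesg | grep -iE 'iommu|dmar' | tail -n 5         # kernel view of IOMMU initialization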
00:07:43.326 EAL: Setting maximum number of open files to 524288 00:07:43.326 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:07:43.326 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:07:43.326 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:07:43.326 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:07:43.326 EAL: Ask a virtual area of 0x61000 bytes 00:07:43.326 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:07:43.326 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:07:43.326 EAL: Ask a virtual area of 0x400000000 bytes 00:07:43.326 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:07:43.326 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:07:43.326 EAL: Hugepages will be freed exactly as allocated. 00:07:43.326 EAL: No shared files mode enabled, IPC is disabled 00:07:43.326 EAL: No shared files mode enabled, IPC is disabled 00:07:43.326 EAL: TSC frequency is ~2700000 KHz 00:07:43.327 EAL: Main lcore 0 is ready (tid=7f95a3f81a00;cpuset=[0]) 00:07:43.327 EAL: Trying to obtain current memory policy. 00:07:43.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.327 EAL: Restoring previous memory policy: 0 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was expanded by 2MB 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: No PCI address specified using 'addr=' in: bus=pci 00:07:43.327 EAL: Mem event callback 'spdk:(nil)' registered 00:07:43.327 00:07:43.327 00:07:43.327 CUnit - A unit testing framework for C - Version 2.1-3 00:07:43.327 http://cunit.sourceforge.net/ 00:07:43.327 00:07:43.327 00:07:43.327 Suite: components_suite 00:07:43.327 Test: vtophys_malloc_test ...passed 00:07:43.327 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:07:43.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.327 EAL: Restoring previous memory policy: 4 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was expanded by 4MB 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was shrunk by 4MB 00:07:43.327 EAL: Trying to obtain current memory policy. 00:07:43.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.327 EAL: Restoring previous memory policy: 4 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was expanded by 6MB 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was shrunk by 6MB 00:07:43.327 EAL: Trying to obtain current memory policy. 00:07:43.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.327 EAL: Restoring previous memory policy: 4 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was expanded by 10MB 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was shrunk by 10MB 00:07:43.327 EAL: Trying to obtain current memory policy. 
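The recurring "Setting policy MPOL_PREFERRED for socket 0" lines show the vtophys test steering its allocations to one NUMA node before each malloc. Outside the unit test the same preference can be expressed with numactl; for example (the workload name is just a placeholder):

# Inspect the two NUMA nodes EAL detected, then prefer node 0 for memory and CPUs.
numactl --hardware
numactl --preferred=0 --cpunodebind=0 ./some_workload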
00:07:43.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.327 EAL: Restoring previous memory policy: 4 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was expanded by 18MB 00:07:43.327 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.327 EAL: request: mp_malloc_sync 00:07:43.327 EAL: No shared files mode enabled, IPC is disabled 00:07:43.327 EAL: Heap on socket 0 was shrunk by 18MB 00:07:43.327 EAL: Trying to obtain current memory policy. 00:07:43.327 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.585 EAL: Restoring previous memory policy: 4 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was expanded by 34MB 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was shrunk by 34MB 00:07:43.585 EAL: Trying to obtain current memory policy. 00:07:43.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.585 EAL: Restoring previous memory policy: 4 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was expanded by 66MB 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was shrunk by 66MB 00:07:43.585 EAL: Trying to obtain current memory policy. 00:07:43.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.585 EAL: Restoring previous memory policy: 4 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was expanded by 130MB 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was shrunk by 130MB 00:07:43.585 EAL: Trying to obtain current memory policy. 00:07:43.585 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.585 EAL: Restoring previous memory policy: 4 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.585 EAL: request: mp_malloc_sync 00:07:43.585 EAL: No shared files mode enabled, IPC is disabled 00:07:43.585 EAL: Heap on socket 0 was expanded by 258MB 00:07:43.585 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.842 EAL: request: mp_malloc_sync 00:07:43.842 EAL: No shared files mode enabled, IPC is disabled 00:07:43.842 EAL: Heap on socket 0 was shrunk by 258MB 00:07:43.842 EAL: Trying to obtain current memory policy. 
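The heap sizes the suite walks through (4, 6, 10, 18, 34, 66, 130 and 258 MB so far, continuing to 514 and 1026 MB below) follow 2^k + 2 MB, i.e. the test roughly doubles its allocation each round up to about 1 GB. A one-liner reproduces the sequence:

# Allocation sizes exercised by vtophys_spdk_malloc_test: 2^k + 2 MB for k = 1..10.
for k in $(seq 1 10); do printf '%dMB ' $(( (1 << k) + 2 )); done; echo
# -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB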
00:07:43.842 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:43.842 EAL: Restoring previous memory policy: 4 00:07:43.842 EAL: Calling mem event callback 'spdk:(nil)' 00:07:43.842 EAL: request: mp_malloc_sync 00:07:43.842 EAL: No shared files mode enabled, IPC is disabled 00:07:43.842 EAL: Heap on socket 0 was expanded by 514MB 00:07:44.100 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.100 EAL: request: mp_malloc_sync 00:07:44.100 EAL: No shared files mode enabled, IPC is disabled 00:07:44.100 EAL: Heap on socket 0 was shrunk by 514MB 00:07:44.100 EAL: Trying to obtain current memory policy. 00:07:44.100 EAL: Setting policy MPOL_PREFERRED for socket 0 00:07:44.377 EAL: Restoring previous memory policy: 4 00:07:44.377 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.377 EAL: request: mp_malloc_sync 00:07:44.377 EAL: No shared files mode enabled, IPC is disabled 00:07:44.377 EAL: Heap on socket 0 was expanded by 1026MB 00:07:44.636 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.896 EAL: request: mp_malloc_sync 00:07:44.896 EAL: No shared files mode enabled, IPC is disabled 00:07:44.896 EAL: Heap on socket 0 was shrunk by 1026MB 00:07:44.896 passed 00:07:44.896 00:07:44.896 Run Summary: Type Total Ran Passed Failed Inactive 00:07:44.896 suites 1 1 n/a 0 0 00:07:44.896 tests 2 2 2 0 0 00:07:44.896 asserts 497 497 497 0 n/a 00:07:44.896 00:07:44.896 Elapsed time = 1.337 seconds 00:07:44.896 EAL: Calling mem event callback 'spdk:(nil)' 00:07:44.896 EAL: request: mp_malloc_sync 00:07:44.896 EAL: No shared files mode enabled, IPC is disabled 00:07:44.896 EAL: Heap on socket 0 was shrunk by 2MB 00:07:44.896 EAL: No shared files mode enabled, IPC is disabled 00:07:44.896 EAL: No shared files mode enabled, IPC is disabled 00:07:44.896 EAL: No shared files mode enabled, IPC is disabled 00:07:44.896 00:07:44.896 real 0m1.459s 00:07:44.896 user 0m0.852s 00:07:44.896 sys 0m0.569s 00:07:44.896 06:18:16 env.env_vtophys -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.896 06:18:16 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:07:44.896 ************************************ 00:07:44.896 END TEST env_vtophys 00:07:44.896 ************************************ 00:07:44.896 06:18:16 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:44.896 06:18:16 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:44.896 06:18:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.896 06:18:16 env -- common/autotest_common.sh@10 -- # set +x 00:07:44.896 ************************************ 00:07:44.896 START TEST env_pci 00:07:44.896 ************************************ 00:07:44.896 06:18:16 env.env_pci -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:07:44.896 00:07:44.896 00:07:44.896 CUnit - A unit testing framework for C - Version 2.1-3 00:07:44.896 http://cunit.sourceforge.net/ 00:07:44.896 00:07:44.896 00:07:44.896 Suite: pci 00:07:44.896 Test: pci_hook ...[2024-11-20 06:18:16.590406] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1964625 has claimed it 00:07:44.896 EAL: Cannot find device (10000:00:01.0) 00:07:44.896 EAL: Failed to attach device on primary process 00:07:44.896 passed 00:07:44.896 00:07:44.897 Run Summary: Type Total Ran Passed Failed Inactive 
00:07:44.897 suites 1 1 n/a 0 0 00:07:44.897 tests 1 1 1 0 0 00:07:44.897 asserts 25 25 25 0 n/a 00:07:44.897 00:07:44.897 Elapsed time = 0.022 seconds 00:07:44.897 00:07:44.897 real 0m0.036s 00:07:44.897 user 0m0.013s 00:07:44.897 sys 0m0.023s 00:07:44.897 06:18:16 env.env_pci -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:44.897 06:18:16 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:07:44.897 ************************************ 00:07:44.897 END TEST env_pci 00:07:44.897 ************************************ 00:07:44.897 06:18:16 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:07:44.897 06:18:16 env -- env/env.sh@15 -- # uname 00:07:44.897 06:18:16 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:07:44.897 06:18:16 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:07:44.897 06:18:16 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:44.897 06:18:16 env -- common/autotest_common.sh@1103 -- # '[' 5 -le 1 ']' 00:07:44.897 06:18:16 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:44.897 06:18:16 env -- common/autotest_common.sh@10 -- # set +x 00:07:44.897 ************************************ 00:07:44.897 START TEST env_dpdk_post_init 00:07:44.897 ************************************ 00:07:44.897 06:18:16 env.env_dpdk_post_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:07:44.897 EAL: Detected CPU lcores: 48 00:07:44.897 EAL: Detected NUMA nodes: 2 00:07:44.897 EAL: Detected shared linkage of DPDK 00:07:44.897 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:44.897 EAL: Selected IOVA mode 'VA' 00:07:44.897 EAL: VFIO support initialized 00:07:44.897 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:45.157 EAL: Using IOMMU type 1 (Type 1) 00:07:45.157 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:07:45.157 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:07:45.157 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:07:45.157 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:07:45.158 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:07:45.158 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:07:45.158 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:07:45.158 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:07:46.096 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:0b:00.0 (socket 0) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:07:46.096 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 
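The probe lines above come from the env_dpdk_post_init unit test attaching the ioat and NVMe devices on this node. For reference, a minimal sketch of invoking these env unit tests by hand, outside the run_test wrapper, using the same binaries and flags the harness passes above; the workspace path is taken from this run, the host is assumed to already be prepared (hugepages, vfio-pci/IOMMU) the way this CI node is, and results may differ slightly from the harness, which stages devices for some of these tests.

#!/usr/bin/env bash
# Re-run the env unit tests exercised in this job without the run_test wrapper.
# Assumes the CI checkout path used above and a host set up like this node.
set -e
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/test/env/pci/pci_ut
$SPDK/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
$SPDK/test/env/mem_callbacks/mem_callbacks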
00:07:49.379 EAL: Releasing PCI mapped resource for 0000:0b:00.0 00:07:49.379 EAL: Calling pci_unmap_resource for 0000:0b:00.0 at 0x202001020000 00:07:49.379 Starting DPDK initialization... 00:07:49.379 Starting SPDK post initialization... 00:07:49.379 SPDK NVMe probe 00:07:49.379 Attaching to 0000:0b:00.0 00:07:49.379 Attached to 0000:0b:00.0 00:07:49.379 Cleaning up... 00:07:49.379 00:07:49.379 real 0m4.381s 00:07:49.379 user 0m3.003s 00:07:49.379 sys 0m0.432s 00:07:49.379 06:18:21 env.env_dpdk_post_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.379 06:18:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:07:49.379 ************************************ 00:07:49.379 END TEST env_dpdk_post_init 00:07:49.379 ************************************ 00:07:49.379 06:18:21 env -- env/env.sh@26 -- # uname 00:07:49.379 06:18:21 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:07:49.379 06:18:21 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:49.379 06:18:21 env -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.379 06:18:21 env -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.379 06:18:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:49.379 ************************************ 00:07:49.379 START TEST env_mem_callbacks 00:07:49.379 ************************************ 00:07:49.379 06:18:21 env.env_mem_callbacks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:07:49.379 EAL: Detected CPU lcores: 48 00:07:49.379 EAL: Detected NUMA nodes: 2 00:07:49.379 EAL: Detected shared linkage of DPDK 00:07:49.379 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:07:49.379 EAL: Selected IOVA mode 'VA' 00:07:49.379 EAL: VFIO support initialized 00:07:49.379 TELEMETRY: No legacy callbacks, legacy socket not created 00:07:49.379 00:07:49.379 00:07:49.379 CUnit - A unit testing framework for C - Version 2.1-3 00:07:49.379 http://cunit.sourceforge.net/ 00:07:49.379 00:07:49.379 00:07:49.379 Suite: memory 00:07:49.379 Test: test ... 
00:07:49.379 register 0x200000200000 2097152 00:07:49.379 malloc 3145728 00:07:49.379 register 0x200000400000 4194304 00:07:49.379 buf 0x200000500000 len 3145728 PASSED 00:07:49.379 malloc 64 00:07:49.379 buf 0x2000004fff40 len 64 PASSED 00:07:49.379 malloc 4194304 00:07:49.379 register 0x200000800000 6291456 00:07:49.379 buf 0x200000a00000 len 4194304 PASSED 00:07:49.379 free 0x200000500000 3145728 00:07:49.379 free 0x2000004fff40 64 00:07:49.379 unregister 0x200000400000 4194304 PASSED 00:07:49.379 free 0x200000a00000 4194304 00:07:49.379 unregister 0x200000800000 6291456 PASSED 00:07:49.379 malloc 8388608 00:07:49.379 register 0x200000400000 10485760 00:07:49.379 buf 0x200000600000 len 8388608 PASSED 00:07:49.379 free 0x200000600000 8388608 00:07:49.379 unregister 0x200000400000 10485760 PASSED 00:07:49.379 passed 00:07:49.379 00:07:49.379 Run Summary: Type Total Ran Passed Failed Inactive 00:07:49.379 suites 1 1 n/a 0 0 00:07:49.379 tests 1 1 1 0 0 00:07:49.379 asserts 15 15 15 0 n/a 00:07:49.379 00:07:49.379 Elapsed time = 0.005 seconds 00:07:49.379 00:07:49.379 real 0m0.048s 00:07:49.379 user 0m0.016s 00:07:49.379 sys 0m0.032s 00:07:49.379 06:18:21 env.env_mem_callbacks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.379 06:18:21 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:07:49.379 ************************************ 00:07:49.379 END TEST env_mem_callbacks 00:07:49.379 ************************************ 00:07:49.379 00:07:49.379 real 0m6.484s 00:07:49.379 user 0m4.239s 00:07:49.379 sys 0m1.284s 00:07:49.379 06:18:21 env -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:49.379 06:18:21 env -- common/autotest_common.sh@10 -- # set +x 00:07:49.379 ************************************ 00:07:49.379 END TEST env 00:07:49.379 ************************************ 00:07:49.379 06:18:21 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:49.379 06:18:21 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:49.379 06:18:21 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:49.379 06:18:21 -- common/autotest_common.sh@10 -- # set +x 00:07:49.638 ************************************ 00:07:49.638 START TEST rpc 00:07:49.638 ************************************ 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:07:49.638 * Looking for test storage... 
00:07:49.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:49.638 06:18:21 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:49.638 06:18:21 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:49.638 06:18:21 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:49.638 06:18:21 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:49.638 06:18:21 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:49.638 06:18:21 rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:49.638 06:18:21 rpc -- scripts/common.sh@345 -- # : 1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:49.638 06:18:21 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:49.638 06:18:21 rpc -- scripts/common.sh@365 -- # decimal 1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@353 -- # local d=1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:49.638 06:18:21 rpc -- scripts/common.sh@355 -- # echo 1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:49.638 06:18:21 rpc -- scripts/common.sh@366 -- # decimal 2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@353 -- # local d=2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:49.638 06:18:21 rpc -- scripts/common.sh@355 -- # echo 2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:49.638 06:18:21 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:49.638 06:18:21 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:49.638 06:18:21 rpc -- scripts/common.sh@368 -- # return 0 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:49.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.638 --rc genhtml_branch_coverage=1 00:07:49.638 --rc genhtml_function_coverage=1 00:07:49.638 --rc genhtml_legend=1 00:07:49.638 --rc geninfo_all_blocks=1 00:07:49.638 --rc geninfo_unexecuted_blocks=1 00:07:49.638 00:07:49.638 ' 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:49.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.638 --rc genhtml_branch_coverage=1 00:07:49.638 --rc genhtml_function_coverage=1 00:07:49.638 --rc genhtml_legend=1 00:07:49.638 --rc geninfo_all_blocks=1 00:07:49.638 --rc geninfo_unexecuted_blocks=1 00:07:49.638 00:07:49.638 ' 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:49.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.638 --rc genhtml_branch_coverage=1 00:07:49.638 --rc genhtml_function_coverage=1 
00:07:49.638 --rc genhtml_legend=1 00:07:49.638 --rc geninfo_all_blocks=1 00:07:49.638 --rc geninfo_unexecuted_blocks=1 00:07:49.638 00:07:49.638 ' 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:49.638 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:49.638 --rc genhtml_branch_coverage=1 00:07:49.638 --rc genhtml_function_coverage=1 00:07:49.638 --rc genhtml_legend=1 00:07:49.638 --rc geninfo_all_blocks=1 00:07:49.638 --rc geninfo_unexecuted_blocks=1 00:07:49.638 00:07:49.638 ' 00:07:49.638 06:18:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1965392 00:07:49.638 06:18:21 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:07:49.638 06:18:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:49.638 06:18:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1965392 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@833 -- # '[' -z 1965392 ']' 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:49.638 06:18:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.638 [2024-11-20 06:18:21.417350] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:07:49.638 [2024-11-20 06:18:21.417430] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965392 ] 00:07:49.897 [2024-11-20 06:18:21.484243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:49.897 [2024-11-20 06:18:21.541038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:07:49.897 [2024-11-20 06:18:21.541093] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1965392' to capture a snapshot of events at runtime. 00:07:49.897 [2024-11-20 06:18:21.541106] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:49.897 [2024-11-20 06:18:21.541116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:49.897 [2024-11-20 06:18:21.541125] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1965392 for offline analysis/debug. 
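The NOTICE lines above spell out how to inspect the tracepoints of the spdk_tgt that rpc.sh just started with -e bdev. A short sketch of that flow, built from the commands the log itself points at; the assumption that the spdk_trace binary sits next to spdk_tgt under build/bin is mine, not stated in this run.

# Start the target with the bdev tracepoint group, as rpc.sh does above, then
# capture the trace either live or from the shared-memory file named in the NOTICE.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt -e bdev &
tgt_pid=$!
sleep 2   # give the target time to come up before attaching the tracer

# Live snapshot of events, as suggested by the NOTICE above:
$SPDK/build/bin/spdk_trace -s spdk_tgt -p "$tgt_pid"

# Or keep the shm-backed trace file for offline analysis:
cp /dev/shm/spdk_tgt_trace.pid"$tgt_pid" ./spdk_tgt_trace.bin

kill "$tgt_pid"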
00:07:49.897 [2024-11-20 06:18:21.541692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.156 06:18:21 rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:50.156 06:18:21 rpc -- common/autotest_common.sh@866 -- # return 0 00:07:50.156 06:18:21 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:50.156 06:18:21 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:50.156 06:18:21 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:07:50.156 06:18:21 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:07:50.156 06:18:21 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.156 06:18:21 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.156 06:18:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.156 ************************************ 00:07:50.156 START TEST rpc_integrity 00:07:50.156 ************************************ 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:07:50.156 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.156 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.157 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:50.157 { 00:07:50.157 "name": "Malloc0", 00:07:50.157 "aliases": [ 00:07:50.157 "8f773a40-da99-4669-93f9-015890144921" 00:07:50.157 ], 00:07:50.157 "product_name": "Malloc disk", 00:07:50.157 "block_size": 512, 00:07:50.157 "num_blocks": 16384, 00:07:50.157 "uuid": "8f773a40-da99-4669-93f9-015890144921", 00:07:50.157 "assigned_rate_limits": { 00:07:50.157 "rw_ios_per_sec": 0, 00:07:50.157 "rw_mbytes_per_sec": 0, 00:07:50.157 "r_mbytes_per_sec": 0, 00:07:50.157 "w_mbytes_per_sec": 0 00:07:50.157 }, 
00:07:50.157 "claimed": false, 00:07:50.157 "zoned": false, 00:07:50.157 "supported_io_types": { 00:07:50.157 "read": true, 00:07:50.157 "write": true, 00:07:50.157 "unmap": true, 00:07:50.157 "flush": true, 00:07:50.157 "reset": true, 00:07:50.157 "nvme_admin": false, 00:07:50.157 "nvme_io": false, 00:07:50.157 "nvme_io_md": false, 00:07:50.157 "write_zeroes": true, 00:07:50.157 "zcopy": true, 00:07:50.157 "get_zone_info": false, 00:07:50.157 "zone_management": false, 00:07:50.157 "zone_append": false, 00:07:50.157 "compare": false, 00:07:50.157 "compare_and_write": false, 00:07:50.157 "abort": true, 00:07:50.157 "seek_hole": false, 00:07:50.157 "seek_data": false, 00:07:50.157 "copy": true, 00:07:50.157 "nvme_iov_md": false 00:07:50.157 }, 00:07:50.157 "memory_domains": [ 00:07:50.157 { 00:07:50.157 "dma_device_id": "system", 00:07:50.157 "dma_device_type": 1 00:07:50.157 }, 00:07:50.157 { 00:07:50.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.157 "dma_device_type": 2 00:07:50.157 } 00:07:50.157 ], 00:07:50.157 "driver_specific": {} 00:07:50.157 } 00:07:50.157 ]' 00:07:50.157 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:50.157 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:50.157 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:07:50.157 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.157 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.157 [2024-11-20 06:18:21.927646] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:07:50.157 [2024-11-20 06:18:21.927699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.157 [2024-11-20 06:18:21.927721] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x265bd20 00:07:50.157 [2024-11-20 06:18:21.927733] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.157 [2024-11-20 06:18:21.929037] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.157 [2024-11-20 06:18:21.929061] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:50.157 Passthru0 00:07:50.157 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.157 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:50.157 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.157 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.157 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.157 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:50.157 { 00:07:50.157 "name": "Malloc0", 00:07:50.157 "aliases": [ 00:07:50.157 "8f773a40-da99-4669-93f9-015890144921" 00:07:50.157 ], 00:07:50.157 "product_name": "Malloc disk", 00:07:50.157 "block_size": 512, 00:07:50.157 "num_blocks": 16384, 00:07:50.157 "uuid": "8f773a40-da99-4669-93f9-015890144921", 00:07:50.157 "assigned_rate_limits": { 00:07:50.157 "rw_ios_per_sec": 0, 00:07:50.157 "rw_mbytes_per_sec": 0, 00:07:50.157 "r_mbytes_per_sec": 0, 00:07:50.157 "w_mbytes_per_sec": 0 00:07:50.157 }, 00:07:50.157 "claimed": true, 00:07:50.157 "claim_type": "exclusive_write", 00:07:50.157 "zoned": false, 00:07:50.157 "supported_io_types": { 00:07:50.157 "read": true, 00:07:50.157 "write": true, 00:07:50.157 "unmap": true, 00:07:50.157 "flush": 
true, 00:07:50.157 "reset": true, 00:07:50.157 "nvme_admin": false, 00:07:50.157 "nvme_io": false, 00:07:50.157 "nvme_io_md": false, 00:07:50.157 "write_zeroes": true, 00:07:50.157 "zcopy": true, 00:07:50.157 "get_zone_info": false, 00:07:50.157 "zone_management": false, 00:07:50.157 "zone_append": false, 00:07:50.157 "compare": false, 00:07:50.157 "compare_and_write": false, 00:07:50.157 "abort": true, 00:07:50.157 "seek_hole": false, 00:07:50.157 "seek_data": false, 00:07:50.157 "copy": true, 00:07:50.157 "nvme_iov_md": false 00:07:50.157 }, 00:07:50.157 "memory_domains": [ 00:07:50.157 { 00:07:50.157 "dma_device_id": "system", 00:07:50.157 "dma_device_type": 1 00:07:50.157 }, 00:07:50.157 { 00:07:50.157 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.157 "dma_device_type": 2 00:07:50.157 } 00:07:50.157 ], 00:07:50.157 "driver_specific": {} 00:07:50.157 }, 00:07:50.157 { 00:07:50.157 "name": "Passthru0", 00:07:50.157 "aliases": [ 00:07:50.157 "78f17b97-0af6-54ee-948f-34db2dcf4c57" 00:07:50.157 ], 00:07:50.157 "product_name": "passthru", 00:07:50.157 "block_size": 512, 00:07:50.157 "num_blocks": 16384, 00:07:50.157 "uuid": "78f17b97-0af6-54ee-948f-34db2dcf4c57", 00:07:50.157 "assigned_rate_limits": { 00:07:50.157 "rw_ios_per_sec": 0, 00:07:50.157 "rw_mbytes_per_sec": 0, 00:07:50.157 "r_mbytes_per_sec": 0, 00:07:50.157 "w_mbytes_per_sec": 0 00:07:50.157 }, 00:07:50.157 "claimed": false, 00:07:50.157 "zoned": false, 00:07:50.158 "supported_io_types": { 00:07:50.158 "read": true, 00:07:50.158 "write": true, 00:07:50.158 "unmap": true, 00:07:50.158 "flush": true, 00:07:50.158 "reset": true, 00:07:50.158 "nvme_admin": false, 00:07:50.158 "nvme_io": false, 00:07:50.158 "nvme_io_md": false, 00:07:50.158 "write_zeroes": true, 00:07:50.158 "zcopy": true, 00:07:50.158 "get_zone_info": false, 00:07:50.158 "zone_management": false, 00:07:50.158 "zone_append": false, 00:07:50.158 "compare": false, 00:07:50.158 "compare_and_write": false, 00:07:50.158 "abort": true, 00:07:50.158 "seek_hole": false, 00:07:50.158 "seek_data": false, 00:07:50.158 "copy": true, 00:07:50.158 "nvme_iov_md": false 00:07:50.158 }, 00:07:50.158 "memory_domains": [ 00:07:50.158 { 00:07:50.158 "dma_device_id": "system", 00:07:50.158 "dma_device_type": 1 00:07:50.158 }, 00:07:50.158 { 00:07:50.158 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.158 "dma_device_type": 2 00:07:50.158 } 00:07:50.158 ], 00:07:50.158 "driver_specific": { 00:07:50.158 "passthru": { 00:07:50.158 "name": "Passthru0", 00:07:50.158 "base_bdev_name": "Malloc0" 00:07:50.158 } 00:07:50.158 } 00:07:50.158 } 00:07:50.158 ]' 00:07:50.158 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:50.158 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:50.158 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:50.158 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.158 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.158 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.158 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:07:50.158 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.158 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.417 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.417 06:18:21 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:07:50.417 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.417 06:18:21 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.417 06:18:22 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.417 06:18:22 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:50.417 06:18:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:50.417 06:18:22 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:50.417 00:07:50.417 real 0m0.209s 00:07:50.417 user 0m0.135s 00:07:50.417 sys 0m0.020s 00:07:50.417 06:18:22 rpc.rpc_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.417 06:18:22 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.417 ************************************ 00:07:50.417 END TEST rpc_integrity 00:07:50.417 ************************************ 00:07:50.417 06:18:22 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:07:50.417 06:18:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.417 06:18:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.417 06:18:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.417 ************************************ 00:07:50.417 START TEST rpc_plugins 00:07:50.417 ************************************ 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@1127 -- # rpc_plugins 00:07:50.417 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.417 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:07:50.417 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.417 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.417 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:07:50.417 { 00:07:50.417 "name": "Malloc1", 00:07:50.417 "aliases": [ 00:07:50.417 "c742ab28-681b-4e03-bfad-3867b08c9ff9" 00:07:50.417 ], 00:07:50.417 "product_name": "Malloc disk", 00:07:50.417 "block_size": 4096, 00:07:50.417 "num_blocks": 256, 00:07:50.417 "uuid": "c742ab28-681b-4e03-bfad-3867b08c9ff9", 00:07:50.417 "assigned_rate_limits": { 00:07:50.417 "rw_ios_per_sec": 0, 00:07:50.417 "rw_mbytes_per_sec": 0, 00:07:50.418 "r_mbytes_per_sec": 0, 00:07:50.418 "w_mbytes_per_sec": 0 00:07:50.418 }, 00:07:50.418 "claimed": false, 00:07:50.418 "zoned": false, 00:07:50.418 "supported_io_types": { 00:07:50.418 "read": true, 00:07:50.418 "write": true, 00:07:50.418 "unmap": true, 00:07:50.418 "flush": true, 00:07:50.418 "reset": true, 00:07:50.418 "nvme_admin": false, 00:07:50.418 "nvme_io": false, 00:07:50.418 "nvme_io_md": false, 00:07:50.418 "write_zeroes": true, 00:07:50.418 "zcopy": true, 00:07:50.418 "get_zone_info": false, 00:07:50.418 "zone_management": false, 00:07:50.418 "zone_append": false, 00:07:50.418 "compare": false, 00:07:50.418 "compare_and_write": false, 00:07:50.418 "abort": true, 00:07:50.418 "seek_hole": false, 00:07:50.418 "seek_data": false, 00:07:50.418 "copy": true, 00:07:50.418 "nvme_iov_md": false 
00:07:50.418 }, 00:07:50.418 "memory_domains": [ 00:07:50.418 { 00:07:50.418 "dma_device_id": "system", 00:07:50.418 "dma_device_type": 1 00:07:50.418 }, 00:07:50.418 { 00:07:50.418 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.418 "dma_device_type": 2 00:07:50.418 } 00:07:50.418 ], 00:07:50.418 "driver_specific": {} 00:07:50.418 } 00:07:50.418 ]' 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:07:50.418 06:18:22 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:07:50.418 00:07:50.418 real 0m0.112s 00:07:50.418 user 0m0.070s 00:07:50.418 sys 0m0.010s 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.418 06:18:22 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:07:50.418 ************************************ 00:07:50.418 END TEST rpc_plugins 00:07:50.418 ************************************ 00:07:50.418 06:18:22 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:07:50.418 06:18:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.418 06:18:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.418 06:18:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.418 ************************************ 00:07:50.418 START TEST rpc_trace_cmd_test 00:07:50.418 ************************************ 00:07:50.418 06:18:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1127 -- # rpc_trace_cmd_test 00:07:50.418 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:07:50.418 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:07:50.418 06:18:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.418 06:18:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:07:50.676 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1965392", 00:07:50.676 "tpoint_group_mask": "0x8", 00:07:50.676 "iscsi_conn": { 00:07:50.676 "mask": "0x2", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "scsi": { 00:07:50.676 "mask": "0x4", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "bdev": { 00:07:50.676 "mask": "0x8", 00:07:50.676 "tpoint_mask": "0xffffffffffffffff" 00:07:50.676 }, 00:07:50.676 "nvmf_rdma": { 00:07:50.676 "mask": "0x10", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "nvmf_tcp": { 00:07:50.676 "mask": "0x20", 00:07:50.676 
"tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "ftl": { 00:07:50.676 "mask": "0x40", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "blobfs": { 00:07:50.676 "mask": "0x80", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "dsa": { 00:07:50.676 "mask": "0x200", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "thread": { 00:07:50.676 "mask": "0x400", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "nvme_pcie": { 00:07:50.676 "mask": "0x800", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "iaa": { 00:07:50.676 "mask": "0x1000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "nvme_tcp": { 00:07:50.676 "mask": "0x2000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "bdev_nvme": { 00:07:50.676 "mask": "0x4000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "sock": { 00:07:50.676 "mask": "0x8000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "blob": { 00:07:50.676 "mask": "0x10000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "bdev_raid": { 00:07:50.676 "mask": "0x20000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 }, 00:07:50.676 "scheduler": { 00:07:50.676 "mask": "0x40000", 00:07:50.676 "tpoint_mask": "0x0" 00:07:50.676 } 00:07:50.676 }' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:07:50.676 00:07:50.676 real 0m0.181s 00:07:50.676 user 0m0.159s 00:07:50.676 sys 0m0.015s 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.676 06:18:22 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 ************************************ 00:07:50.676 END TEST rpc_trace_cmd_test 00:07:50.676 ************************************ 00:07:50.676 06:18:22 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:07:50.676 06:18:22 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:07:50.676 06:18:22 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:07:50.676 06:18:22 rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:50.676 06:18:22 rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:50.676 06:18:22 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 ************************************ 00:07:50.676 START TEST rpc_daemon_integrity 00:07:50.676 ************************************ 00:07:50.676 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1127 -- # rpc_integrity 00:07:50.676 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:07:50.676 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.676 06:18:22 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.676 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.676 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:07:50.676 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.934 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:07:50.934 { 00:07:50.934 "name": "Malloc2", 00:07:50.934 "aliases": [ 00:07:50.934 "f247ff91-58a9-4fe5-95ae-70678394d807" 00:07:50.934 ], 00:07:50.934 "product_name": "Malloc disk", 00:07:50.934 "block_size": 512, 00:07:50.934 "num_blocks": 16384, 00:07:50.934 "uuid": "f247ff91-58a9-4fe5-95ae-70678394d807", 00:07:50.934 "assigned_rate_limits": { 00:07:50.934 "rw_ios_per_sec": 0, 00:07:50.934 "rw_mbytes_per_sec": 0, 00:07:50.934 "r_mbytes_per_sec": 0, 00:07:50.934 "w_mbytes_per_sec": 0 00:07:50.934 }, 00:07:50.934 "claimed": false, 00:07:50.934 "zoned": false, 00:07:50.934 "supported_io_types": { 00:07:50.934 "read": true, 00:07:50.934 "write": true, 00:07:50.934 "unmap": true, 00:07:50.934 "flush": true, 00:07:50.934 "reset": true, 00:07:50.934 "nvme_admin": false, 00:07:50.934 "nvme_io": false, 00:07:50.934 "nvme_io_md": false, 00:07:50.934 "write_zeroes": true, 00:07:50.934 "zcopy": true, 00:07:50.934 "get_zone_info": false, 00:07:50.934 "zone_management": false, 00:07:50.934 "zone_append": false, 00:07:50.934 "compare": false, 00:07:50.934 "compare_and_write": false, 00:07:50.934 "abort": true, 00:07:50.934 "seek_hole": false, 00:07:50.934 "seek_data": false, 00:07:50.934 "copy": true, 00:07:50.934 "nvme_iov_md": false 00:07:50.934 }, 00:07:50.934 "memory_domains": [ 00:07:50.934 { 00:07:50.934 "dma_device_id": "system", 00:07:50.934 "dma_device_type": 1 00:07:50.934 }, 00:07:50.935 { 00:07:50.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.935 "dma_device_type": 2 00:07:50.935 } 00:07:50.935 ], 00:07:50.935 "driver_specific": {} 00:07:50.935 } 00:07:50.935 ]' 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 [2024-11-20 06:18:22.569791] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:07:50.935 
[2024-11-20 06:18:22.569828] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:50.935 [2024-11-20 06:18:22.569849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2518f10 00:07:50.935 [2024-11-20 06:18:22.569862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:50.935 [2024-11-20 06:18:22.571056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:50.935 [2024-11-20 06:18:22.571080] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:07:50.935 Passthru0 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:07:50.935 { 00:07:50.935 "name": "Malloc2", 00:07:50.935 "aliases": [ 00:07:50.935 "f247ff91-58a9-4fe5-95ae-70678394d807" 00:07:50.935 ], 00:07:50.935 "product_name": "Malloc disk", 00:07:50.935 "block_size": 512, 00:07:50.935 "num_blocks": 16384, 00:07:50.935 "uuid": "f247ff91-58a9-4fe5-95ae-70678394d807", 00:07:50.935 "assigned_rate_limits": { 00:07:50.935 "rw_ios_per_sec": 0, 00:07:50.935 "rw_mbytes_per_sec": 0, 00:07:50.935 "r_mbytes_per_sec": 0, 00:07:50.935 "w_mbytes_per_sec": 0 00:07:50.935 }, 00:07:50.935 "claimed": true, 00:07:50.935 "claim_type": "exclusive_write", 00:07:50.935 "zoned": false, 00:07:50.935 "supported_io_types": { 00:07:50.935 "read": true, 00:07:50.935 "write": true, 00:07:50.935 "unmap": true, 00:07:50.935 "flush": true, 00:07:50.935 "reset": true, 00:07:50.935 "nvme_admin": false, 00:07:50.935 "nvme_io": false, 00:07:50.935 "nvme_io_md": false, 00:07:50.935 "write_zeroes": true, 00:07:50.935 "zcopy": true, 00:07:50.935 "get_zone_info": false, 00:07:50.935 "zone_management": false, 00:07:50.935 "zone_append": false, 00:07:50.935 "compare": false, 00:07:50.935 "compare_and_write": false, 00:07:50.935 "abort": true, 00:07:50.935 "seek_hole": false, 00:07:50.935 "seek_data": false, 00:07:50.935 "copy": true, 00:07:50.935 "nvme_iov_md": false 00:07:50.935 }, 00:07:50.935 "memory_domains": [ 00:07:50.935 { 00:07:50.935 "dma_device_id": "system", 00:07:50.935 "dma_device_type": 1 00:07:50.935 }, 00:07:50.935 { 00:07:50.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.935 "dma_device_type": 2 00:07:50.935 } 00:07:50.935 ], 00:07:50.935 "driver_specific": {} 00:07:50.935 }, 00:07:50.935 { 00:07:50.935 "name": "Passthru0", 00:07:50.935 "aliases": [ 00:07:50.935 "9d2b743c-e99e-587a-9306-fa2a1a0d16e8" 00:07:50.935 ], 00:07:50.935 "product_name": "passthru", 00:07:50.935 "block_size": 512, 00:07:50.935 "num_blocks": 16384, 00:07:50.935 "uuid": "9d2b743c-e99e-587a-9306-fa2a1a0d16e8", 00:07:50.935 "assigned_rate_limits": { 00:07:50.935 "rw_ios_per_sec": 0, 00:07:50.935 "rw_mbytes_per_sec": 0, 00:07:50.935 "r_mbytes_per_sec": 0, 00:07:50.935 "w_mbytes_per_sec": 0 00:07:50.935 }, 00:07:50.935 "claimed": false, 00:07:50.935 "zoned": false, 00:07:50.935 "supported_io_types": { 00:07:50.935 "read": true, 00:07:50.935 "write": true, 00:07:50.935 "unmap": true, 00:07:50.935 "flush": true, 00:07:50.935 "reset": true, 
00:07:50.935 "nvme_admin": false, 00:07:50.935 "nvme_io": false, 00:07:50.935 "nvme_io_md": false, 00:07:50.935 "write_zeroes": true, 00:07:50.935 "zcopy": true, 00:07:50.935 "get_zone_info": false, 00:07:50.935 "zone_management": false, 00:07:50.935 "zone_append": false, 00:07:50.935 "compare": false, 00:07:50.935 "compare_and_write": false, 00:07:50.935 "abort": true, 00:07:50.935 "seek_hole": false, 00:07:50.935 "seek_data": false, 00:07:50.935 "copy": true, 00:07:50.935 "nvme_iov_md": false 00:07:50.935 }, 00:07:50.935 "memory_domains": [ 00:07:50.935 { 00:07:50.935 "dma_device_id": "system", 00:07:50.935 "dma_device_type": 1 00:07:50.935 }, 00:07:50.935 { 00:07:50.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:50.935 "dma_device_type": 2 00:07:50.935 } 00:07:50.935 ], 00:07:50.935 "driver_specific": { 00:07:50.935 "passthru": { 00:07:50.935 "name": "Passthru0", 00:07:50.935 "base_bdev_name": "Malloc2" 00:07:50.935 } 00:07:50.935 } 00:07:50.935 } 00:07:50.935 ]' 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:07:50.935 00:07:50.935 real 0m0.212s 00:07:50.935 user 0m0.135s 00:07:50.935 sys 0m0.023s 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:50.935 06:18:22 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:07:50.935 ************************************ 00:07:50.935 END TEST rpc_daemon_integrity 00:07:50.935 ************************************ 00:07:50.935 06:18:22 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:07:50.935 06:18:22 rpc -- rpc/rpc.sh@84 -- # killprocess 1965392 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@952 -- # '[' -z 1965392 ']' 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@956 -- # kill -0 1965392 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@957 -- # uname 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1965392 
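Stripped of the xtrace noise, the rpc_integrity and rpc_daemon_integrity tests above each create a malloc bdev, layer a passthru bdev on top of it, check the bdev list with jq, and then tear both down. A hedged manual equivalent of that round trip against a running spdk_tgt on the default /var/tmp/spdk.sock socket, assuming scripts/rpc.py is the client behind the rpc_cmd wrapper:

# Same RPC sequence the integrity tests drive, issued directly with rpc.py.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC=$SPDK/scripts/rpc.py

$RPC bdev_malloc_create 8 512                      # creates Malloc0
$RPC bdev_passthru_create -b Malloc0 -p Passthru0  # claims Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 2 bdevs
$RPC bdev_passthru_delete Passthru0
$RPC bdev_malloc_delete Malloc0
$RPC bdev_get_bdevs | jq length                    # expect 0 again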
00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1965392' 00:07:50.935 killing process with pid 1965392 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@971 -- # kill 1965392 00:07:50.935 06:18:22 rpc -- common/autotest_common.sh@976 -- # wait 1965392 00:07:51.500 00:07:51.500 real 0m1.937s 00:07:51.500 user 0m2.402s 00:07:51.500 sys 0m0.579s 00:07:51.500 06:18:23 rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:51.500 06:18:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 ************************************ 00:07:51.500 END TEST rpc 00:07:51.500 ************************************ 00:07:51.500 06:18:23 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:51.500 06:18:23 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.500 06:18:23 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.500 06:18:23 -- common/autotest_common.sh@10 -- # set +x 00:07:51.500 ************************************ 00:07:51.500 START TEST skip_rpc 00:07:51.501 ************************************ 00:07:51.501 06:18:23 skip_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:07:51.501 * Looking for test storage... 00:07:51.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:07:51.501 06:18:23 skip_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:07:51.501 06:18:23 skip_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:07:51.501 06:18:23 skip_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:07:51.758 06:18:23 skip_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@345 -- # : 1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:51.758 06:18:23 skip_rpc -- scripts/common.sh@368 -- # return 0 00:07:51.758 06:18:23 skip_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:51.758 06:18:23 skip_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:07:51.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.758 --rc genhtml_branch_coverage=1 00:07:51.758 --rc genhtml_function_coverage=1 00:07:51.758 --rc genhtml_legend=1 00:07:51.758 --rc geninfo_all_blocks=1 00:07:51.758 --rc geninfo_unexecuted_blocks=1 00:07:51.758 00:07:51.758 ' 00:07:51.758 06:18:23 skip_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:07:51.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.758 --rc genhtml_branch_coverage=1 00:07:51.758 --rc genhtml_function_coverage=1 00:07:51.758 --rc genhtml_legend=1 00:07:51.758 --rc geninfo_all_blocks=1 00:07:51.758 --rc geninfo_unexecuted_blocks=1 00:07:51.758 00:07:51.758 ' 00:07:51.758 06:18:23 skip_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:07:51.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.758 --rc genhtml_branch_coverage=1 00:07:51.758 --rc genhtml_function_coverage=1 00:07:51.758 --rc genhtml_legend=1 00:07:51.758 --rc geninfo_all_blocks=1 00:07:51.759 --rc geninfo_unexecuted_blocks=1 00:07:51.759 00:07:51.759 ' 00:07:51.759 06:18:23 skip_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:07:51.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:51.759 --rc genhtml_branch_coverage=1 00:07:51.759 --rc genhtml_function_coverage=1 00:07:51.759 --rc genhtml_legend=1 00:07:51.759 --rc geninfo_all_blocks=1 00:07:51.759 --rc geninfo_unexecuted_blocks=1 00:07:51.759 00:07:51.759 ' 00:07:51.759 06:18:23 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:51.759 06:18:23 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:07:51.759 06:18:23 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:07:51.759 06:18:23 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:51.759 06:18:23 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:51.759 06:18:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:51.759 ************************************ 00:07:51.759 START TEST skip_rpc 00:07:51.759 ************************************ 00:07:51.759 06:18:23 skip_rpc.skip_rpc -- common/autotest_common.sh@1127 -- # test_skip_rpc 00:07:51.759 
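The skip_rpc test that starts below launches spdk_tgt with --no-rpc-server and asserts that a plain RPC call cannot reach it. A minimal hand-run version of the same check, again assuming scripts/rpc.py as the RPC client behind rpc_cmd:

# Start a target with no RPC server and confirm spdk_get_version gets no answer,
# which is what the skip_rpc test below asserts (it sleeps before probing).
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

$SPDK/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
tgt_pid=$!
sleep 5

if $SPDK/scripts/rpc.py spdk_get_version; then
    echo "unexpected: RPC server answered"
else
    echo "expected: no RPC server listening"
fi

kill "$tgt_pid"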
06:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1965732 00:07:51.759 06:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:07:51.759 06:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:51.759 06:18:23 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:07:51.759 [2024-11-20 06:18:23.443160] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:07:51.759 [2024-11-20 06:18:23.443224] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1965732 ] 00:07:51.759 [2024-11-20 06:18:23.507850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.759 [2024-11-20 06:18:23.567681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1965732 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # '[' -z 1965732 ']' 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # kill -0 1965732 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # uname 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1965732 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1965732' 00:07:57.073 killing process with pid 1965732 00:07:57.073 06:18:28 
skip_rpc.skip_rpc -- common/autotest_common.sh@971 -- # kill 1965732 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@976 -- # wait 1965732 00:07:57.073 00:07:57.073 real 0m5.461s 00:07:57.073 user 0m5.162s 00:07:57.073 sys 0m0.311s 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:07:57.073 06:18:28 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.073 ************************************ 00:07:57.073 END TEST skip_rpc 00:07:57.073 ************************************ 00:07:57.073 06:18:28 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:07:57.073 06:18:28 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:07:57.073 06:18:28 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:07:57.073 06:18:28 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:57.073 ************************************ 00:07:57.073 START TEST skip_rpc_with_json 00:07:57.073 ************************************ 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_json 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1966429 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1966429 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # '[' -z 1966429 ']' 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # local max_retries=100 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:57.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # xtrace_disable 00:07:57.073 06:18:28 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:57.331 [2024-11-20 06:18:28.952776] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:07:57.331 [2024-11-20 06:18:28.952855] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1966429 ] 00:07:57.331 [2024-11-20 06:18:29.018024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.331 [2024-11-20 06:18:29.078142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@866 -- # return 0 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:57.589 [2024-11-20 06:18:29.355553] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:07:57.589 request: 00:07:57.589 { 00:07:57.589 "trtype": "tcp", 00:07:57.589 "method": "nvmf_get_transports", 00:07:57.589 "req_id": 1 00:07:57.589 } 00:07:57.589 Got JSON-RPC error response 00:07:57.589 response: 00:07:57.589 { 00:07:57.589 "code": -19, 00:07:57.589 "message": "No such device" 00:07:57.589 } 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:57.589 [2024-11-20 06:18:29.363695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:57.589 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:07:57.847 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:57.847 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:57.847 { 00:07:57.847 "subsystems": [ 00:07:57.847 { 00:07:57.847 "subsystem": "fsdev", 00:07:57.847 "config": [ 00:07:57.847 { 00:07:57.847 "method": "fsdev_set_opts", 00:07:57.847 "params": { 00:07:57.847 "fsdev_io_pool_size": 65535, 00:07:57.847 "fsdev_io_cache_size": 256 00:07:57.847 } 00:07:57.847 } 00:07:57.847 ] 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "vfio_user_target", 00:07:57.847 "config": null 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "keyring", 00:07:57.847 "config": [] 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "iobuf", 00:07:57.847 "config": [ 00:07:57.847 { 00:07:57.847 "method": "iobuf_set_options", 00:07:57.847 "params": { 00:07:57.847 "small_pool_count": 8192, 00:07:57.847 "large_pool_count": 1024, 00:07:57.847 "small_bufsize": 8192, 00:07:57.847 "large_bufsize": 135168, 00:07:57.847 "enable_numa": false 00:07:57.847 } 00:07:57.847 } 
00:07:57.847 ] 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "sock", 00:07:57.847 "config": [ 00:07:57.847 { 00:07:57.847 "method": "sock_set_default_impl", 00:07:57.847 "params": { 00:07:57.847 "impl_name": "posix" 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "sock_impl_set_options", 00:07:57.847 "params": { 00:07:57.847 "impl_name": "ssl", 00:07:57.847 "recv_buf_size": 4096, 00:07:57.847 "send_buf_size": 4096, 00:07:57.847 "enable_recv_pipe": true, 00:07:57.847 "enable_quickack": false, 00:07:57.847 "enable_placement_id": 0, 00:07:57.847 "enable_zerocopy_send_server": true, 00:07:57.847 "enable_zerocopy_send_client": false, 00:07:57.847 "zerocopy_threshold": 0, 00:07:57.847 "tls_version": 0, 00:07:57.847 "enable_ktls": false 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "sock_impl_set_options", 00:07:57.847 "params": { 00:07:57.847 "impl_name": "posix", 00:07:57.847 "recv_buf_size": 2097152, 00:07:57.847 "send_buf_size": 2097152, 00:07:57.847 "enable_recv_pipe": true, 00:07:57.847 "enable_quickack": false, 00:07:57.847 "enable_placement_id": 0, 00:07:57.847 "enable_zerocopy_send_server": true, 00:07:57.847 "enable_zerocopy_send_client": false, 00:07:57.847 "zerocopy_threshold": 0, 00:07:57.847 "tls_version": 0, 00:07:57.847 "enable_ktls": false 00:07:57.847 } 00:07:57.847 } 00:07:57.847 ] 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "vmd", 00:07:57.847 "config": [] 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "accel", 00:07:57.847 "config": [ 00:07:57.847 { 00:07:57.847 "method": "accel_set_options", 00:07:57.847 "params": { 00:07:57.847 "small_cache_size": 128, 00:07:57.847 "large_cache_size": 16, 00:07:57.847 "task_count": 2048, 00:07:57.847 "sequence_count": 2048, 00:07:57.847 "buf_count": 2048 00:07:57.847 } 00:07:57.847 } 00:07:57.847 ] 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "subsystem": "bdev", 00:07:57.847 "config": [ 00:07:57.847 { 00:07:57.847 "method": "bdev_set_options", 00:07:57.847 "params": { 00:07:57.847 "bdev_io_pool_size": 65535, 00:07:57.847 "bdev_io_cache_size": 256, 00:07:57.847 "bdev_auto_examine": true, 00:07:57.847 "iobuf_small_cache_size": 128, 00:07:57.847 "iobuf_large_cache_size": 16 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "bdev_raid_set_options", 00:07:57.847 "params": { 00:07:57.847 "process_window_size_kb": 1024, 00:07:57.847 "process_max_bandwidth_mb_sec": 0 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "bdev_iscsi_set_options", 00:07:57.847 "params": { 00:07:57.847 "timeout_sec": 30 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "bdev_nvme_set_options", 00:07:57.847 "params": { 00:07:57.847 "action_on_timeout": "none", 00:07:57.847 "timeout_us": 0, 00:07:57.847 "timeout_admin_us": 0, 00:07:57.847 "keep_alive_timeout_ms": 10000, 00:07:57.847 "arbitration_burst": 0, 00:07:57.847 "low_priority_weight": 0, 00:07:57.847 "medium_priority_weight": 0, 00:07:57.847 "high_priority_weight": 0, 00:07:57.847 "nvme_adminq_poll_period_us": 10000, 00:07:57.847 "nvme_ioq_poll_period_us": 0, 00:07:57.847 "io_queue_requests": 0, 00:07:57.847 "delay_cmd_submit": true, 00:07:57.847 "transport_retry_count": 4, 00:07:57.847 "bdev_retry_count": 3, 00:07:57.847 "transport_ack_timeout": 0, 00:07:57.847 "ctrlr_loss_timeout_sec": 0, 00:07:57.847 "reconnect_delay_sec": 0, 00:07:57.847 "fast_io_fail_timeout_sec": 0, 00:07:57.847 "disable_auto_failback": false, 00:07:57.847 "generate_uuids": false, 00:07:57.847 "transport_tos": 
0, 00:07:57.847 "nvme_error_stat": false, 00:07:57.847 "rdma_srq_size": 0, 00:07:57.847 "io_path_stat": false, 00:07:57.847 "allow_accel_sequence": false, 00:07:57.847 "rdma_max_cq_size": 0, 00:07:57.847 "rdma_cm_event_timeout_ms": 0, 00:07:57.847 "dhchap_digests": [ 00:07:57.847 "sha256", 00:07:57.847 "sha384", 00:07:57.847 "sha512" 00:07:57.847 ], 00:07:57.847 "dhchap_dhgroups": [ 00:07:57.847 "null", 00:07:57.847 "ffdhe2048", 00:07:57.847 "ffdhe3072", 00:07:57.847 "ffdhe4096", 00:07:57.847 "ffdhe6144", 00:07:57.847 "ffdhe8192" 00:07:57.847 ] 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "bdev_nvme_set_hotplug", 00:07:57.847 "params": { 00:07:57.847 "period_us": 100000, 00:07:57.847 "enable": false 00:07:57.847 } 00:07:57.847 }, 00:07:57.847 { 00:07:57.847 "method": "bdev_wait_for_examine" 00:07:57.847 } 00:07:57.848 ] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "scsi", 00:07:57.848 "config": null 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "scheduler", 00:07:57.848 "config": [ 00:07:57.848 { 00:07:57.848 "method": "framework_set_scheduler", 00:07:57.848 "params": { 00:07:57.848 "name": "static" 00:07:57.848 } 00:07:57.848 } 00:07:57.848 ] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "vhost_scsi", 00:07:57.848 "config": [] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "vhost_blk", 00:07:57.848 "config": [] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "ublk", 00:07:57.848 "config": [] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "nbd", 00:07:57.848 "config": [] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "nvmf", 00:07:57.848 "config": [ 00:07:57.848 { 00:07:57.848 "method": "nvmf_set_config", 00:07:57.848 "params": { 00:07:57.848 "discovery_filter": "match_any", 00:07:57.848 "admin_cmd_passthru": { 00:07:57.848 "identify_ctrlr": false 00:07:57.848 }, 00:07:57.848 "dhchap_digests": [ 00:07:57.848 "sha256", 00:07:57.848 "sha384", 00:07:57.848 "sha512" 00:07:57.848 ], 00:07:57.848 "dhchap_dhgroups": [ 00:07:57.848 "null", 00:07:57.848 "ffdhe2048", 00:07:57.848 "ffdhe3072", 00:07:57.848 "ffdhe4096", 00:07:57.848 "ffdhe6144", 00:07:57.848 "ffdhe8192" 00:07:57.848 ] 00:07:57.848 } 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "method": "nvmf_set_max_subsystems", 00:07:57.848 "params": { 00:07:57.848 "max_subsystems": 1024 00:07:57.848 } 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "method": "nvmf_set_crdt", 00:07:57.848 "params": { 00:07:57.848 "crdt1": 0, 00:07:57.848 "crdt2": 0, 00:07:57.848 "crdt3": 0 00:07:57.848 } 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "method": "nvmf_create_transport", 00:07:57.848 "params": { 00:07:57.848 "trtype": "TCP", 00:07:57.848 "max_queue_depth": 128, 00:07:57.848 "max_io_qpairs_per_ctrlr": 127, 00:07:57.848 "in_capsule_data_size": 4096, 00:07:57.848 "max_io_size": 131072, 00:07:57.848 "io_unit_size": 131072, 00:07:57.848 "max_aq_depth": 128, 00:07:57.848 "num_shared_buffers": 511, 00:07:57.848 "buf_cache_size": 4294967295, 00:07:57.848 "dif_insert_or_strip": false, 00:07:57.848 "zcopy": false, 00:07:57.848 "c2h_success": true, 00:07:57.848 "sock_priority": 0, 00:07:57.848 "abort_timeout_sec": 1, 00:07:57.848 "ack_timeout": 0, 00:07:57.848 "data_wr_pool_size": 0 00:07:57.848 } 00:07:57.848 } 00:07:57.848 ] 00:07:57.848 }, 00:07:57.848 { 00:07:57.848 "subsystem": "iscsi", 00:07:57.848 "config": [ 00:07:57.848 { 00:07:57.848 "method": "iscsi_set_options", 00:07:57.848 "params": { 00:07:57.848 "node_base": "iqn.2016-06.io.spdk", 00:07:57.848 "max_sessions": 
128, 00:07:57.848 "max_connections_per_session": 2, 00:07:57.848 "max_queue_depth": 64, 00:07:57.848 "default_time2wait": 2, 00:07:57.848 "default_time2retain": 20, 00:07:57.848 "first_burst_length": 8192, 00:07:57.848 "immediate_data": true, 00:07:57.848 "allow_duplicated_isid": false, 00:07:57.848 "error_recovery_level": 0, 00:07:57.848 "nop_timeout": 60, 00:07:57.848 "nop_in_interval": 30, 00:07:57.848 "disable_chap": false, 00:07:57.848 "require_chap": false, 00:07:57.848 "mutual_chap": false, 00:07:57.848 "chap_group": 0, 00:07:57.848 "max_large_datain_per_connection": 64, 00:07:57.848 "max_r2t_per_connection": 4, 00:07:57.848 "pdu_pool_size": 36864, 00:07:57.848 "immediate_data_pool_size": 16384, 00:07:57.848 "data_out_pool_size": 2048 00:07:57.848 } 00:07:57.848 } 00:07:57.848 ] 00:07:57.848 } 00:07:57.848 ] 00:07:57.848 } 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1966429 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1966429 ']' 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1966429 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1966429 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1966429' 00:07:57.848 killing process with pid 1966429 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1966429 00:07:57.848 06:18:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1966429 00:07:58.414 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1966569 00:07:58.414 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:07:58.414 06:18:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:08:03.674 06:18:34 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1966569 00:08:03.674 06:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # '[' -z 1966569 ']' 00:08:03.674 06:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # kill -0 1966569 00:08:03.674 06:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # uname 00:08:03.674 06:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:03.674 06:18:34 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1966569 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@970 -- 
# echo 'killing process with pid 1966569' 00:08:03.674 killing process with pid 1966569 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@971 -- # kill 1966569 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@976 -- # wait 1966569 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:08:03.674 00:08:03.674 real 0m6.556s 00:08:03.674 user 0m6.186s 00:08:03.674 sys 0m0.685s 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:08:03.674 ************************************ 00:08:03.674 END TEST skip_rpc_with_json 00:08:03.674 ************************************ 00:08:03.674 06:18:35 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:08:03.674 06:18:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.674 06:18:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.674 06:18:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.674 ************************************ 00:08:03.674 START TEST skip_rpc_with_delay 00:08:03.674 ************************************ 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1127 -- # test_skip_rpc_with_delay 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:03.674 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:03.675 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:08:03.934 
[2024-11-20 06:18:35.553742] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 00:08:03.934 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:08:03.934 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:03.934 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:03.934 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:03.934 00:08:03.934 real 0m0.073s 00:08:03.934 user 0m0.046s 00:08:03.934 sys 0m0.027s 00:08:03.934 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:03.934 06:18:35 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:08:03.934 ************************************ 00:08:03.934 END TEST skip_rpc_with_delay 00:08:03.934 ************************************ 00:08:03.934 06:18:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:08:03.934 06:18:35 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:08:03.934 06:18:35 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:08:03.934 06:18:35 skip_rpc -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:03.934 06:18:35 skip_rpc -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:03.934 06:18:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:03.934 ************************************ 00:08:03.934 START TEST exit_on_failed_rpc_init 00:08:03.934 ************************************ 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1127 -- # test_exit_on_failed_rpc_init 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1967280 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1967280 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # '[' -z 1967280 ']' 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:03.934 06:18:35 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:03.934 [2024-11-20 06:18:35.677866] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
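The skip_rpc_with_delay failure above is the expected outcome: spdk_tgt itself rejects --wait-for-rpc when --no-rpc-server is also given, and the test only verifies that the combination exits non-zero. A hedged sketch of that check, with the NOT/es bookkeeping from autotest_common.sh simplified away and $SPDK_DIR as in the earlier sketch:

# The flag combination must be refused by spdk_tgt itself (exit status != 0).
if "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 --wait-for-rpc; then
    echo "FAIL: spdk_tgt accepted --wait-for-rpc without an RPC server" >&2
    exit 1
fi
echo "OK: spdk_tgt rejected the invalid flag combination"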
00:08:03.934 [2024-11-20 06:18:35.677946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967280 ] 00:08:03.934 [2024-11-20 06:18:35.743056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.192 [2024-11-20 06:18:35.804539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@866 -- # return 0 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:08:04.451 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:08:04.451 [2024-11-20 06:18:36.121578] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:04.451 [2024-11-20 06:18:36.121665] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967412 ] 00:08:04.451 [2024-11-20 06:18:36.186033] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:04.451 [2024-11-20 06:18:36.246209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.451 [2024-11-20 06:18:36.246327] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
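The "socket in use" error above is the point of exit_on_failed_rpc_init: a second spdk_tgt aimed at the same default RPC socket must fail RPC initialization and exit rather than hang. A minimal sketch of the scenario, again with $SPDK_DIR as above and a plain sleep standing in for waitforlisten:

# First target owns /var/tmp/spdk.sock; a second one on the same socket must exit non-zero.
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 &
first_pid=$!
sleep 5
if "$SPDK_DIR/build/bin/spdk_tgt" -m 0x2; then     # expected to fail fast with "in use"
    echo "FAIL: second target should have failed RPC init" >&2
    exit 1
fi
kill "$first_pid"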
00:08:04.451 [2024-11-20 06:18:36.246349] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:08:04.451 [2024-11-20 06:18:36.246361] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1967280 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # '[' -z 1967280 ']' 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # kill -0 1967280 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # uname 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1967280 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1967280' 00:08:04.710 killing process with pid 1967280 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@971 -- # kill 1967280 00:08:04.710 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@976 -- # wait 1967280 00:08:04.969 00:08:04.969 real 0m1.164s 00:08:04.969 user 0m1.273s 00:08:04.969 sys 0m0.436s 00:08:04.969 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:04.969 06:18:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:08:04.969 ************************************ 00:08:04.969 END TEST exit_on_failed_rpc_init 00:08:04.969 ************************************ 00:08:05.228 06:18:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:08:05.228 00:08:05.228 real 0m13.595s 00:08:05.228 user 0m12.851s 00:08:05.228 sys 0m1.638s 00:08:05.228 06:18:36 skip_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.228 06:18:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:05.228 ************************************ 00:08:05.228 END TEST skip_rpc 00:08:05.228 ************************************ 00:08:05.228 06:18:36 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:05.228 06:18:36 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.228 06:18:36 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.228 06:18:36 -- 
common/autotest_common.sh@10 -- # set +x 00:08:05.228 ************************************ 00:08:05.228 START TEST rpc_client 00:08:05.228 ************************************ 00:08:05.228 06:18:36 rpc_client -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:08:05.228 * Looking for test storage... 00:08:05.228 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:08:05.228 06:18:36 rpc_client -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:05.228 06:18:36 rpc_client -- common/autotest_common.sh@1691 -- # lcov --version 00:08:05.228 06:18:36 rpc_client -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.228 06:18:37 rpc_client -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.228 06:18:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:08:05.228 06:18:37 rpc_client -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.228 06:18:37 rpc_client -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.228 --rc genhtml_branch_coverage=1 00:08:05.228 --rc genhtml_function_coverage=1 00:08:05.228 --rc genhtml_legend=1 00:08:05.228 --rc geninfo_all_blocks=1 00:08:05.228 --rc geninfo_unexecuted_blocks=1 00:08:05.228 00:08:05.228 ' 00:08:05.228 06:18:37 rpc_client -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.228 --rc genhtml_branch_coverage=1 00:08:05.228 --rc genhtml_function_coverage=1 00:08:05.228 --rc genhtml_legend=1 00:08:05.228 --rc geninfo_all_blocks=1 00:08:05.228 --rc geninfo_unexecuted_blocks=1 00:08:05.228 00:08:05.228 ' 00:08:05.228 06:18:37 rpc_client -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.228 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.228 --rc genhtml_branch_coverage=1 00:08:05.228 --rc genhtml_function_coverage=1 00:08:05.228 --rc genhtml_legend=1 00:08:05.229 --rc geninfo_all_blocks=1 00:08:05.229 --rc geninfo_unexecuted_blocks=1 00:08:05.229 00:08:05.229 ' 00:08:05.229 06:18:37 rpc_client -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.229 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.229 --rc genhtml_branch_coverage=1 00:08:05.229 --rc genhtml_function_coverage=1 00:08:05.229 --rc genhtml_legend=1 00:08:05.229 --rc geninfo_all_blocks=1 00:08:05.229 --rc geninfo_unexecuted_blocks=1 00:08:05.229 00:08:05.229 ' 00:08:05.229 06:18:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:08:05.229 OK 00:08:05.229 06:18:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:08:05.229 00:08:05.229 real 0m0.160s 00:08:05.229 user 0m0.107s 00:08:05.229 sys 0m0.061s 00:08:05.229 06:18:37 rpc_client -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:05.229 06:18:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:08:05.229 ************************************ 00:08:05.229 END TEST rpc_client 00:08:05.229 ************************************ 00:08:05.229 06:18:37 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 
00:08:05.229 06:18:37 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:05.229 06:18:37 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:05.229 06:18:37 -- common/autotest_common.sh@10 -- # set +x 00:08:05.488 ************************************ 00:08:05.488 START TEST json_config 00:08:05.488 ************************************ 00:08:05.488 06:18:37 json_config -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:08:05.488 06:18:37 json_config -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:05.488 06:18:37 json_config -- common/autotest_common.sh@1691 -- # lcov --version 00:08:05.488 06:18:37 json_config -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:05.488 06:18:37 json_config -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:05.488 06:18:37 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:05.488 06:18:37 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:05.488 06:18:37 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:05.488 06:18:37 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:08:05.488 06:18:37 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:08:05.488 06:18:37 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:08:05.488 06:18:37 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:08:05.488 06:18:37 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:08:05.488 06:18:37 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:08:05.488 06:18:37 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:08:05.488 06:18:37 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:05.488 06:18:37 json_config -- scripts/common.sh@344 -- # case "$op" in 00:08:05.488 06:18:37 json_config -- scripts/common.sh@345 -- # : 1 00:08:05.488 06:18:37 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:05.488 06:18:37 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:05.489 06:18:37 json_config -- scripts/common.sh@365 -- # decimal 1 00:08:05.489 06:18:37 json_config -- scripts/common.sh@353 -- # local d=1 00:08:05.489 06:18:37 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:05.489 06:18:37 json_config -- scripts/common.sh@355 -- # echo 1 00:08:05.489 06:18:37 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:08:05.489 06:18:37 json_config -- scripts/common.sh@366 -- # decimal 2 00:08:05.489 06:18:37 json_config -- scripts/common.sh@353 -- # local d=2 00:08:05.489 06:18:37 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:05.489 06:18:37 json_config -- scripts/common.sh@355 -- # echo 2 00:08:05.489 06:18:37 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:08:05.489 06:18:37 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:05.489 06:18:37 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:05.489 06:18:37 json_config -- scripts/common.sh@368 -- # return 0 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:05.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.489 --rc genhtml_branch_coverage=1 00:08:05.489 --rc genhtml_function_coverage=1 00:08:05.489 --rc genhtml_legend=1 00:08:05.489 --rc geninfo_all_blocks=1 00:08:05.489 --rc geninfo_unexecuted_blocks=1 00:08:05.489 00:08:05.489 ' 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:05.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.489 --rc genhtml_branch_coverage=1 00:08:05.489 --rc genhtml_function_coverage=1 00:08:05.489 --rc genhtml_legend=1 00:08:05.489 --rc geninfo_all_blocks=1 00:08:05.489 --rc geninfo_unexecuted_blocks=1 00:08:05.489 00:08:05.489 ' 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:05.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.489 --rc genhtml_branch_coverage=1 00:08:05.489 --rc genhtml_function_coverage=1 00:08:05.489 --rc genhtml_legend=1 00:08:05.489 --rc geninfo_all_blocks=1 00:08:05.489 --rc geninfo_unexecuted_blocks=1 00:08:05.489 00:08:05.489 ' 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:05.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:05.489 --rc genhtml_branch_coverage=1 00:08:05.489 --rc genhtml_function_coverage=1 00:08:05.489 --rc genhtml_legend=1 00:08:05.489 --rc geninfo_all_blocks=1 00:08:05.489 --rc geninfo_unexecuted_blocks=1 00:08:05.489 00:08:05.489 ' 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:08:05.489 06:18:37 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:05.489 06:18:37 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:08:05.489 06:18:37 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:05.489 06:18:37 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:05.489 06:18:37 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:05.489 06:18:37 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.489 06:18:37 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.489 06:18:37 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.489 06:18:37 json_config -- paths/export.sh@5 -- # export PATH 00:08:05.489 06:18:37 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@51 -- # : 0 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 
00:08:05.489 06:18:37 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:05.489 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:05.489 06:18:37 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:08:05.489 INFO: JSON configuration test init 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.489 06:18:37 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:08:05.489 06:18:37 json_config -- 
json_config/common.sh@9 -- # local app=target 00:08:05.489 06:18:37 json_config -- json_config/common.sh@10 -- # shift 00:08:05.489 06:18:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:05.489 06:18:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:05.489 06:18:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:05.489 06:18:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:05.489 06:18:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:05.489 06:18:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1967672 00:08:05.489 06:18:37 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:08:05.489 06:18:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:05.489 Waiting for target to run... 00:08:05.489 06:18:37 json_config -- json_config/common.sh@25 -- # waitforlisten 1967672 /var/tmp/spdk_tgt.sock 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@833 -- # '[' -z 1967672 ']' 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:05.489 06:18:37 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:05.490 06:18:37 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:05.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:05.490 06:18:37 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:05.490 06:18:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:05.490 [2024-11-20 06:18:37.269795] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
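Unlike the rpc tests above, the json_config target is started on its own socket (-r /var/tmp/spdk_tgt.sock) with a 1024 MB memory cap and --wait-for-rpc, so every later tgt_rpc call names that socket explicitly. A sketch of the same launch-and-drive pattern, with waitforlisten replaced by a plain poll loop; in the captured run the release of --wait-for-rpc happens through load_config, which is fed the output of scripts/gen_nvme.sh --json-with-subsystems:

# Start the json_config target on a dedicated RPC socket and wait until it answers.
sock=/var/tmp/spdk_tgt.sock
"$SPDK_DIR/build/bin/spdk_tgt" -m 0x1 -s 1024 -r "$sock" --wait-for-rpc &
tgt_pid=$!
until "$SPDK_DIR/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5                                      # poll instead of waitforlisten
done
# --wait-for-rpc parks initialization until it is released over RPC:
"$SPDK_DIR/scripts/rpc.py" -s "$sock" framework_start_init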
00:08:05.490 [2024-11-20 06:18:37.269865] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1967672 ] 00:08:06.057 [2024-11-20 06:18:37.604014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.057 [2024-11-20 06:18:37.646327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.622 06:18:38 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:06.622 06:18:38 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:06.622 06:18:38 json_config -- json_config/common.sh@26 -- # echo '' 00:08:06.622 00:08:06.622 06:18:38 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:08:06.622 06:18:38 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:08:06.622 06:18:38 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:06.622 06:18:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:06.622 06:18:38 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:08:06.622 06:18:38 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:08:06.622 06:18:38 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:06.622 06:18:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:06.622 06:18:38 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:08:06.622 06:18:38 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:08:06.622 06:18:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:08:09.909 06:18:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:09.909 06:18:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:08:09.909 06:18:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@51 -- # local get_types 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:08:09.909 06:18:41 json_config -- 
json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@54 -- # sort 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@54 -- # type_diff= 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:08:09.909 06:18:41 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:08:09.909 06:18:41 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:09.909 06:18:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@62 -- # return 0 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:08:10.167 06:18:41 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:10.167 06:18:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:08:10.167 06:18:41 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:10.167 06:18:41 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:08:10.425 MallocForNvmf0 00:08:10.425 06:18:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:10.425 06:18:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:08:10.683 MallocForNvmf1 00:08:10.683 06:18:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:08:10.683 06:18:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:08:10.942 [2024-11-20 06:18:42.526245] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.942 06:18:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:10.942 06:18:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:11.200 06:18:42 json_config -- 
json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:11.200 06:18:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:08:11.458 06:18:43 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:11.458 06:18:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:08:11.716 06:18:43 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:11.716 06:18:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:08:11.973 [2024-11-20 06:18:43.601636] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:11.973 06:18:43 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:08:11.973 06:18:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.973 06:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:11.973 06:18:43 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:08:11.973 06:18:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:11.973 06:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:11.973 06:18:43 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:08:11.973 06:18:43 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:11.973 06:18:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:08:12.231 MallocBdevForConfigChangeCheck 00:08:12.231 06:18:43 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:08:12.231 06:18:43 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:12.231 06:18:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:12.231 06:18:43 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:08:12.231 06:18:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:12.796 06:18:44 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...' 00:08:12.796 INFO: shutting down applications... 
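The create_nvmf_subsystem_config steps above are ordinary rpc.py calls against the target socket; stitched together they look roughly like this, with the values copied from the trace and save_config written to the snapshot file that the later relaunch consumes:

rpc=("$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk_tgt.sock)
"${rpc[@]}" bdev_malloc_create 8 512 --name MallocForNvmf0      # 8 MiB bdev, 512-byte blocks
"${rpc[@]}" bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MiB bdev, 1024-byte blocks
"${rpc[@]}" nvmf_create_transport -t tcp -u 8192 -c 0
"${rpc[@]}" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
"${rpc[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
"${rpc[@]}" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
"${rpc[@]}" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
"${rpc[@]}" save_config > spdk_tgt_config.json                   # snapshot reused by the relaunch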
00:08:12.796 06:18:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:08:12.796 06:18:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:08:12.796 06:18:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:08:12.796 06:18:44 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:08:14.170 Calling clear_iscsi_subsystem 00:08:14.170 Calling clear_nvmf_subsystem 00:08:14.170 Calling clear_nbd_subsystem 00:08:14.170 Calling clear_ublk_subsystem 00:08:14.170 Calling clear_vhost_blk_subsystem 00:08:14.170 Calling clear_vhost_scsi_subsystem 00:08:14.171 Calling clear_bdev_subsystem 00:08:14.171 06:18:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:08:14.171 06:18:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:08:14.171 06:18:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:08:14.171 06:18:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:14.171 06:18:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:08:14.171 06:18:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:08:14.736 06:18:46 json_config -- json_config/json_config.sh@352 -- # break 00:08:14.737 06:18:46 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:08:14.737 06:18:46 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target 00:08:14.737 06:18:46 json_config -- json_config/common.sh@31 -- # local app=target 00:08:14.737 06:18:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:14.737 06:18:46 json_config -- json_config/common.sh@35 -- # [[ -n 1967672 ]] 00:08:14.737 06:18:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1967672 00:08:14.737 06:18:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:14.737 06:18:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:14.737 06:18:46 json_config -- json_config/common.sh@41 -- # kill -0 1967672 00:08:14.737 06:18:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:08:15.305 06:18:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:08:15.305 06:18:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:15.305 06:18:46 json_config -- json_config/common.sh@41 -- # kill -0 1967672 00:08:15.305 06:18:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:15.305 06:18:46 json_config -- json_config/common.sh@43 -- # break 00:08:15.305 06:18:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:15.305 06:18:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:15.305 SPDK target shutdown done 00:08:15.305 06:18:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:08:15.305 INFO: relaunching applications... 
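The shutdown just traced clears the live configuration with clear_config.py and then waits for the SIGINT'd target to exit by polling. A condensed sketch of that pattern, using the pid from the trace (the real helper lives in json_config/common.sh):

    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    kill -SIGINT 1967672
    for (( i = 0; i < 30; i++ )); do
        kill -0 1967672 2>/dev/null || break    # kill -0 only tests whether the PID still exists
        sleep 0.5
    done
    echo 'SPDK target shutdown done'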
00:08:15.305 06:18:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:15.305 06:18:46 json_config -- json_config/common.sh@9 -- # local app=target 00:08:15.305 06:18:46 json_config -- json_config/common.sh@10 -- # shift 00:08:15.305 06:18:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:15.305 06:18:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:15.305 06:18:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:08:15.305 06:18:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:15.305 06:18:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:15.305 06:18:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1968875 00:08:15.305 06:18:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:15.305 06:18:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:15.305 Waiting for target to run... 00:08:15.305 06:18:46 json_config -- json_config/common.sh@25 -- # waitforlisten 1968875 /var/tmp/spdk_tgt.sock 00:08:15.305 06:18:46 json_config -- common/autotest_common.sh@833 -- # '[' -z 1968875 ']' 00:08:15.305 06:18:46 json_config -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:15.305 06:18:46 json_config -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:15.305 06:18:46 json_config -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:15.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:15.305 06:18:46 json_config -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:15.305 06:18:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:15.305 [2024-11-20 06:18:46.959803] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:15.305 [2024-11-20 06:18:46.959929] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1968875 ] 00:08:15.873 [2024-11-20 06:18:47.529179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:15.873 [2024-11-20 06:18:47.581883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:19.181 [2024-11-20 06:18:50.635943] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:19.181 [2024-11-20 06:18:50.668432] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:08:19.747 06:18:51 json_config -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:19.747 06:18:51 json_config -- common/autotest_common.sh@866 -- # return 0 00:08:19.747 06:18:51 json_config -- json_config/common.sh@26 -- # echo '' 00:08:19.747 00:08:19.747 06:18:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:08:19.747 06:18:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:08:19.747 INFO: Checking if target configuration is the same... 
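The relaunch step above restarts spdk_tgt from the configuration that was just saved and then blocks in waitforlisten until the RPC socket answers. Roughly, with paths shortened and the polling one-liner standing in for waitforlisten (whose internals are not shown in the trace):

    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    app_pid=$!
    # stand-in for waitforlisten: poll until the socket accepts an RPC
    until scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done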
00:08:19.747 06:18:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:19.747 06:18:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:08:19.747 06:18:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:19.747 + '[' 2 -ne 2 ']' 00:08:19.747 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:19.747 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:08:19.747 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:19.747 +++ basename /dev/fd/62 00:08:19.747 ++ mktemp /tmp/62.XXX 00:08:19.747 + tmp_file_1=/tmp/62.V0O 00:08:19.747 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:19.747 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:19.747 + tmp_file_2=/tmp/spdk_tgt_config.json.FgO 00:08:19.747 + ret=0 00:08:19.747 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:20.312 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:20.312 + diff -u /tmp/62.V0O /tmp/spdk_tgt_config.json.FgO 00:08:20.312 + echo 'INFO: JSON config files are the same' 00:08:20.312 INFO: JSON config files are the same 00:08:20.312 + rm /tmp/62.V0O /tmp/spdk_tgt_config.json.FgO 00:08:20.312 + exit 0 00:08:20.312 06:18:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:08:20.312 06:18:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:08:20.313 INFO: changing configuration and checking if this can be detected... 00:08:20.313 06:18:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:20.313 06:18:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:08:20.570 06:18:52 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:20.570 06:18:52 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:08:20.570 06:18:52 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:08:20.570 + '[' 2 -ne 2 ']' 00:08:20.570 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:08:20.570 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
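json_diff.sh, whose xtrace is interleaved above, decides "same or changed" by dumping the live configuration with save_config, normalizing both sides through config_filter.py -method sort, and diffing the results; deleting MallocBdevForConfigChangeCheck is what makes the second pass differ. A sketch of the idea, where the temp-file names are illustrative and config_filter.py is assumed to read the JSON on stdin (the trace elides the redirections):

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method sort > /tmp/live.sorted
    test/json_config/config_filter.py -method sort < spdk_tgt_config.json > /tmp/file.sorted
    if diff -u /tmp/live.sorted /tmp/file.sorted; then
        echo 'INFO: JSON config files are the same'
    else
        echo 'INFO: configuration change detected.'
    fi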
00:08:20.570 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:08:20.570 +++ basename /dev/fd/62 00:08:20.570 ++ mktemp /tmp/62.XXX 00:08:20.570 + tmp_file_1=/tmp/62.uaZ 00:08:20.570 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:20.570 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:08:20.570 + tmp_file_2=/tmp/spdk_tgt_config.json.rwU 00:08:20.570 + ret=0 00:08:20.570 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:20.828 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:08:20.828 + diff -u /tmp/62.uaZ /tmp/spdk_tgt_config.json.rwU 00:08:20.828 + ret=1 00:08:20.828 + echo '=== Start of file: /tmp/62.uaZ ===' 00:08:20.828 + cat /tmp/62.uaZ 00:08:20.828 + echo '=== End of file: /tmp/62.uaZ ===' 00:08:20.828 + echo '' 00:08:20.828 + echo '=== Start of file: /tmp/spdk_tgt_config.json.rwU ===' 00:08:20.828 + cat /tmp/spdk_tgt_config.json.rwU 00:08:20.828 + echo '=== End of file: /tmp/spdk_tgt_config.json.rwU ===' 00:08:20.828 + echo '' 00:08:20.828 + rm /tmp/62.uaZ /tmp/spdk_tgt_config.json.rwU 00:08:20.828 + exit 1 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:08:20.828 INFO: configuration change detected. 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:08:20.828 06:18:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.828 06:18:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 1968875 ]] 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:08:20.828 06:18:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:08:20.828 06:18:52 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:20.828 06:18:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:21.086 06:18:52 json_config -- json_config/json_config.sh@330 -- # killprocess 1968875 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@952 -- # '[' -z 1968875 ']' 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@956 -- # kill -0 1968875 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@957 -- # uname 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:21.086 06:18:52 
json_config -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1968875 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1968875' 00:08:21.086 killing process with pid 1968875 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@971 -- # kill 1968875 00:08:21.086 06:18:52 json_config -- common/autotest_common.sh@976 -- # wait 1968875 00:08:22.462 06:18:54 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:08:22.462 06:18:54 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:08:22.462 06:18:54 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.462 06:18:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 06:18:54 json_config -- json_config/json_config.sh@335 -- # return 0 00:08:22.722 06:18:54 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:08:22.722 INFO: Success 00:08:22.722 00:08:22.722 real 0m17.247s 00:08:22.722 user 0m19.034s 00:08:22.722 sys 0m2.665s 00:08:22.722 06:18:54 json_config -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:22.722 06:18:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 ************************************ 00:08:22.722 END TEST json_config 00:08:22.722 ************************************ 00:08:22.722 06:18:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:22.722 06:18:54 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:22.722 06:18:54 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:22.722 06:18:54 -- common/autotest_common.sh@10 -- # set +x 00:08:22.722 ************************************ 00:08:22.722 START TEST json_config_extra_key 00:08:22.722 ************************************ 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1691 -- # lcov --version 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.722 06:18:54 json_config_extra_key 
-- scripts/common.sh@340 -- # ver1_l=2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.722 06:18:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:22.722 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.722 --rc genhtml_branch_coverage=1 00:08:22.722 --rc genhtml_function_coverage=1 00:08:22.722 --rc genhtml_legend=1 00:08:22.722 --rc geninfo_all_blocks=1 00:08:22.722 --rc geninfo_unexecuted_blocks=1 00:08:22.722 00:08:22.722 ' 00:08:22.722 06:18:54 json_config_extra_key -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:22.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.723 --rc genhtml_branch_coverage=1 00:08:22.723 --rc genhtml_function_coverage=1 00:08:22.723 --rc genhtml_legend=1 00:08:22.723 --rc geninfo_all_blocks=1 00:08:22.723 --rc geninfo_unexecuted_blocks=1 00:08:22.723 00:08:22.723 ' 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:22.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.723 --rc genhtml_branch_coverage=1 00:08:22.723 --rc genhtml_function_coverage=1 00:08:22.723 --rc genhtml_legend=1 00:08:22.723 --rc geninfo_all_blocks=1 00:08:22.723 --rc geninfo_unexecuted_blocks=1 00:08:22.723 00:08:22.723 ' 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:22.723 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.723 --rc genhtml_branch_coverage=1 00:08:22.723 --rc genhtml_function_coverage=1 00:08:22.723 --rc genhtml_legend=1 00:08:22.723 --rc geninfo_all_blocks=1 00:08:22.723 --rc geninfo_unexecuted_blocks=1 00:08:22.723 00:08:22.723 ' 00:08:22.723 06:18:54 json_config_extra_key -- 
json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.723 06:18:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.723 06:18:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.723 06:18:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.723 06:18:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.723 06:18:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.723 06:18:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.723 06:18:54 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.723 06:18:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:08:22.723 06:18:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:22.723 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:22.723 06:18:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:08:22.723 INFO: launching applications... 
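The "[: : integer expression expected" complaint above comes from sourcing nvmf/common.sh: line 33 evaluates '[' '' -eq 1 ']', a numeric test against a variable that is empty in this environment, so test prints the warning and the script carries on unaffected. A defensive form of that kind of check looks like the sketch below; SOME_FLAG is a placeholder, not the variable actually used at line 33:

    # hypothetical guard: only do the numeric comparison when the variable is non-empty
    if [ -n "${SOME_FLAG:-}" ] && [ "$SOME_FLAG" -eq 1 ]; then
        echo "flag enabled"
    fi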
00:08:22.723 06:18:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1969924 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:08:22.723 Waiting for target to run... 00:08:22.723 06:18:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1969924 /var/tmp/spdk_tgt.sock 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@833 -- # '[' -z 1969924 ']' 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:08:22.723 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:22.723 06:18:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:22.984 [2024-11-20 06:18:54.561729] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:22.984 [2024-11-20 06:18:54.561803] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1969924 ] 00:08:23.243 [2024-11-20 06:18:54.912828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.243 [2024-11-20 06:18:54.957522] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.808 06:18:55 json_config_extra_key -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:23.809 06:18:55 json_config_extra_key -- common/autotest_common.sh@866 -- # return 0 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:08:23.809 00:08:23.809 06:18:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:08:23.809 INFO: shutting down applications... 
00:08:23.809 06:18:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1969924 ]] 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1969924 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1969924 00:08:23.809 06:18:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1969924 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@43 -- # break 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:08:24.374 06:18:56 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:08:24.374 SPDK target shutdown done 00:08:24.374 06:18:56 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:08:24.374 Success 00:08:24.374 00:08:24.374 real 0m1.665s 00:08:24.374 user 0m1.637s 00:08:24.374 sys 0m0.476s 00:08:24.374 06:18:56 json_config_extra_key -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:24.374 06:18:56 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:08:24.374 ************************************ 00:08:24.374 END TEST json_config_extra_key 00:08:24.374 ************************************ 00:08:24.374 06:18:56 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:24.374 06:18:56 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:24.374 06:18:56 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:24.374 06:18:56 -- common/autotest_common.sh@10 -- # set +x 00:08:24.374 ************************************ 00:08:24.374 START TEST alias_rpc 00:08:24.374 ************************************ 00:08:24.374 06:18:56 alias_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:08:24.374 * Looking for test storage... 
00:08:24.374 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:08:24.374 06:18:56 alias_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:24.374 06:18:56 alias_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:08:24.374 06:18:56 alias_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@345 -- # : 1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.633 06:18:56 alias_rpc -- scripts/common.sh@368 -- # return 0 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.633 --rc genhtml_branch_coverage=1 00:08:24.633 --rc genhtml_function_coverage=1 00:08:24.633 --rc genhtml_legend=1 00:08:24.633 --rc geninfo_all_blocks=1 00:08:24.633 --rc geninfo_unexecuted_blocks=1 00:08:24.633 00:08:24.633 ' 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.633 --rc genhtml_branch_coverage=1 00:08:24.633 --rc genhtml_function_coverage=1 00:08:24.633 --rc genhtml_legend=1 00:08:24.633 --rc geninfo_all_blocks=1 00:08:24.633 --rc geninfo_unexecuted_blocks=1 00:08:24.633 00:08:24.633 ' 00:08:24.633 06:18:56 
alias_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.633 --rc genhtml_branch_coverage=1 00:08:24.633 --rc genhtml_function_coverage=1 00:08:24.633 --rc genhtml_legend=1 00:08:24.633 --rc geninfo_all_blocks=1 00:08:24.633 --rc geninfo_unexecuted_blocks=1 00:08:24.633 00:08:24.633 ' 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:24.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.633 --rc genhtml_branch_coverage=1 00:08:24.633 --rc genhtml_function_coverage=1 00:08:24.633 --rc genhtml_legend=1 00:08:24.633 --rc geninfo_all_blocks=1 00:08:24.633 --rc geninfo_unexecuted_blocks=1 00:08:24.633 00:08:24.633 ' 00:08:24.633 06:18:56 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:08:24.633 06:18:56 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1970197 00:08:24.633 06:18:56 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:24.633 06:18:56 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1970197 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@833 -- # '[' -z 1970197 ']' 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:24.633 06:18:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:24.633 [2024-11-20 06:18:56.280522] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:08:24.633 [2024-11-20 06:18:56.280610] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970197 ] 00:08:24.633 [2024-11-20 06:18:56.344645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.633 [2024-11-20 06:18:56.401786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.891 06:18:56 alias_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:24.891 06:18:56 alias_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:24.891 06:18:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:08:25.150 06:18:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1970197 00:08:25.150 06:18:56 alias_rpc -- common/autotest_common.sh@952 -- # '[' -z 1970197 ']' 00:08:25.150 06:18:56 alias_rpc -- common/autotest_common.sh@956 -- # kill -0 1970197 00:08:25.150 06:18:56 alias_rpc -- common/autotest_common.sh@957 -- # uname 00:08:25.150 06:18:56 alias_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:25.150 06:18:56 alias_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1970197 00:08:25.407 06:18:56 alias_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:25.407 06:18:56 alias_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:25.407 06:18:56 alias_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1970197' 00:08:25.407 killing process with pid 1970197 00:08:25.407 06:18:56 alias_rpc -- common/autotest_common.sh@971 -- # kill 1970197 00:08:25.407 06:18:56 alias_rpc -- common/autotest_common.sh@976 -- # wait 1970197 00:08:25.665 00:08:25.665 real 0m1.329s 00:08:25.665 user 0m1.473s 00:08:25.665 sys 0m0.413s 00:08:25.665 06:18:57 alias_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:25.665 06:18:57 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:25.665 ************************************ 00:08:25.665 END TEST alias_rpc 00:08:25.665 ************************************ 00:08:25.665 06:18:57 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:08:25.666 06:18:57 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:25.666 06:18:57 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:25.666 06:18:57 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:25.666 06:18:57 -- common/autotest_common.sh@10 -- # set +x 00:08:25.666 ************************************ 00:08:25.666 START TEST spdkcli_tcp 00:08:25.666 ************************************ 00:08:25.666 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:08:25.925 * Looking for test storage... 
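The alias_rpc run traced just above follows the same skeleton as the other RPC tests: install an ERR trap that tears the target down, start spdk_tgt, wait for the socket, drive rpc.py (here load_config -i, whose input plumbing is not visible in the trace), and finish with killprocess. Condensed sketch using the helpers named in the trace:

    trap 'killprocess $spdk_tgt_pid; exit 1' ERR    # clean up the target if any test command fails
    build/bin/spdk_tgt &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"
    scripts/rpc.py load_config -i                   # config input elided in the trace
    killprocess "$spdk_tgt_pid"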
00:08:25.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:25.925 06:18:57 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:25.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.925 --rc genhtml_branch_coverage=1 00:08:25.925 --rc genhtml_function_coverage=1 00:08:25.925 --rc genhtml_legend=1 00:08:25.925 --rc geninfo_all_blocks=1 00:08:25.925 --rc geninfo_unexecuted_blocks=1 00:08:25.925 00:08:25.925 ' 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:25.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.925 --rc genhtml_branch_coverage=1 00:08:25.925 --rc genhtml_function_coverage=1 00:08:25.925 --rc genhtml_legend=1 00:08:25.925 --rc geninfo_all_blocks=1 00:08:25.925 --rc 
geninfo_unexecuted_blocks=1 00:08:25.925 00:08:25.925 ' 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:25.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.925 --rc genhtml_branch_coverage=1 00:08:25.925 --rc genhtml_function_coverage=1 00:08:25.925 --rc genhtml_legend=1 00:08:25.925 --rc geninfo_all_blocks=1 00:08:25.925 --rc geninfo_unexecuted_blocks=1 00:08:25.925 00:08:25.925 ' 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:25.925 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:25.925 --rc genhtml_branch_coverage=1 00:08:25.925 --rc genhtml_function_coverage=1 00:08:25.925 --rc genhtml_legend=1 00:08:25.925 --rc geninfo_all_blocks=1 00:08:25.925 --rc geninfo_unexecuted_blocks=1 00:08:25.925 00:08:25.925 ' 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1970439 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:08:25.925 06:18:57 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 1970439 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@833 -- # '[' -z 1970439 ']' 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:25.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:25.925 06:18:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:25.925 [2024-11-20 06:18:57.663460] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
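spdkcli_tcp differs from the earlier tests in that it exercises rpc.py over TCP rather than over the UNIX socket: as the next stretch of the trace shows, a socat process bridges TCP port 9998 to /var/tmp/spdk.sock and rpc_get_methods is issued through that bridge, producing the long method list that follows. The two commands, lifted from the trace with the -r 100 -t 2 flags carried over verbatim:

    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!
    scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods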
00:08:25.925 [2024-11-20 06:18:57.663534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970439 ] 00:08:25.925 [2024-11-20 06:18:57.725958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:26.184 [2024-11-20 06:18:57.784841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.184 [2024-11-20 06:18:57.784846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.442 06:18:58 spdkcli_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:26.442 06:18:58 spdkcli_tcp -- common/autotest_common.sh@866 -- # return 0 00:08:26.442 06:18:58 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1970450 00:08:26.442 06:18:58 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:08:26.442 06:18:58 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:08:26.701 [ 00:08:26.701 "bdev_malloc_delete", 00:08:26.701 "bdev_malloc_create", 00:08:26.701 "bdev_null_resize", 00:08:26.701 "bdev_null_delete", 00:08:26.701 "bdev_null_create", 00:08:26.701 "bdev_nvme_cuse_unregister", 00:08:26.701 "bdev_nvme_cuse_register", 00:08:26.701 "bdev_opal_new_user", 00:08:26.701 "bdev_opal_set_lock_state", 00:08:26.701 "bdev_opal_delete", 00:08:26.701 "bdev_opal_get_info", 00:08:26.701 "bdev_opal_create", 00:08:26.701 "bdev_nvme_opal_revert", 00:08:26.701 "bdev_nvme_opal_init", 00:08:26.701 "bdev_nvme_send_cmd", 00:08:26.701 "bdev_nvme_set_keys", 00:08:26.701 "bdev_nvme_get_path_iostat", 00:08:26.701 "bdev_nvme_get_mdns_discovery_info", 00:08:26.701 "bdev_nvme_stop_mdns_discovery", 00:08:26.701 "bdev_nvme_start_mdns_discovery", 00:08:26.701 "bdev_nvme_set_multipath_policy", 00:08:26.701 "bdev_nvme_set_preferred_path", 00:08:26.701 "bdev_nvme_get_io_paths", 00:08:26.701 "bdev_nvme_remove_error_injection", 00:08:26.701 "bdev_nvme_add_error_injection", 00:08:26.701 "bdev_nvme_get_discovery_info", 00:08:26.701 "bdev_nvme_stop_discovery", 00:08:26.701 "bdev_nvme_start_discovery", 00:08:26.701 "bdev_nvme_get_controller_health_info", 00:08:26.701 "bdev_nvme_disable_controller", 00:08:26.701 "bdev_nvme_enable_controller", 00:08:26.701 "bdev_nvme_reset_controller", 00:08:26.701 "bdev_nvme_get_transport_statistics", 00:08:26.701 "bdev_nvme_apply_firmware", 00:08:26.701 "bdev_nvme_detach_controller", 00:08:26.701 "bdev_nvme_get_controllers", 00:08:26.701 "bdev_nvme_attach_controller", 00:08:26.701 "bdev_nvme_set_hotplug", 00:08:26.701 "bdev_nvme_set_options", 00:08:26.701 "bdev_passthru_delete", 00:08:26.701 "bdev_passthru_create", 00:08:26.701 "bdev_lvol_set_parent_bdev", 00:08:26.701 "bdev_lvol_set_parent", 00:08:26.701 "bdev_lvol_check_shallow_copy", 00:08:26.701 "bdev_lvol_start_shallow_copy", 00:08:26.701 "bdev_lvol_grow_lvstore", 00:08:26.701 "bdev_lvol_get_lvols", 00:08:26.701 "bdev_lvol_get_lvstores", 00:08:26.701 "bdev_lvol_delete", 00:08:26.701 "bdev_lvol_set_read_only", 00:08:26.701 "bdev_lvol_resize", 00:08:26.701 "bdev_lvol_decouple_parent", 00:08:26.701 "bdev_lvol_inflate", 00:08:26.701 "bdev_lvol_rename", 00:08:26.701 "bdev_lvol_clone_bdev", 00:08:26.701 "bdev_lvol_clone", 00:08:26.701 "bdev_lvol_snapshot", 00:08:26.701 "bdev_lvol_create", 00:08:26.701 "bdev_lvol_delete_lvstore", 00:08:26.701 "bdev_lvol_rename_lvstore", 
00:08:26.701 "bdev_lvol_create_lvstore", 00:08:26.701 "bdev_raid_set_options", 00:08:26.701 "bdev_raid_remove_base_bdev", 00:08:26.701 "bdev_raid_add_base_bdev", 00:08:26.701 "bdev_raid_delete", 00:08:26.701 "bdev_raid_create", 00:08:26.701 "bdev_raid_get_bdevs", 00:08:26.701 "bdev_error_inject_error", 00:08:26.701 "bdev_error_delete", 00:08:26.701 "bdev_error_create", 00:08:26.701 "bdev_split_delete", 00:08:26.701 "bdev_split_create", 00:08:26.701 "bdev_delay_delete", 00:08:26.701 "bdev_delay_create", 00:08:26.701 "bdev_delay_update_latency", 00:08:26.701 "bdev_zone_block_delete", 00:08:26.701 "bdev_zone_block_create", 00:08:26.701 "blobfs_create", 00:08:26.701 "blobfs_detect", 00:08:26.701 "blobfs_set_cache_size", 00:08:26.701 "bdev_aio_delete", 00:08:26.701 "bdev_aio_rescan", 00:08:26.701 "bdev_aio_create", 00:08:26.701 "bdev_ftl_set_property", 00:08:26.701 "bdev_ftl_get_properties", 00:08:26.701 "bdev_ftl_get_stats", 00:08:26.701 "bdev_ftl_unmap", 00:08:26.701 "bdev_ftl_unload", 00:08:26.701 "bdev_ftl_delete", 00:08:26.701 "bdev_ftl_load", 00:08:26.701 "bdev_ftl_create", 00:08:26.701 "bdev_virtio_attach_controller", 00:08:26.701 "bdev_virtio_scsi_get_devices", 00:08:26.701 "bdev_virtio_detach_controller", 00:08:26.701 "bdev_virtio_blk_set_hotplug", 00:08:26.701 "bdev_iscsi_delete", 00:08:26.701 "bdev_iscsi_create", 00:08:26.701 "bdev_iscsi_set_options", 00:08:26.701 "accel_error_inject_error", 00:08:26.701 "ioat_scan_accel_module", 00:08:26.701 "dsa_scan_accel_module", 00:08:26.701 "iaa_scan_accel_module", 00:08:26.701 "vfu_virtio_create_fs_endpoint", 00:08:26.701 "vfu_virtio_create_scsi_endpoint", 00:08:26.701 "vfu_virtio_scsi_remove_target", 00:08:26.701 "vfu_virtio_scsi_add_target", 00:08:26.701 "vfu_virtio_create_blk_endpoint", 00:08:26.701 "vfu_virtio_delete_endpoint", 00:08:26.701 "keyring_file_remove_key", 00:08:26.701 "keyring_file_add_key", 00:08:26.701 "keyring_linux_set_options", 00:08:26.701 "fsdev_aio_delete", 00:08:26.701 "fsdev_aio_create", 00:08:26.701 "iscsi_get_histogram", 00:08:26.701 "iscsi_enable_histogram", 00:08:26.701 "iscsi_set_options", 00:08:26.701 "iscsi_get_auth_groups", 00:08:26.701 "iscsi_auth_group_remove_secret", 00:08:26.701 "iscsi_auth_group_add_secret", 00:08:26.701 "iscsi_delete_auth_group", 00:08:26.701 "iscsi_create_auth_group", 00:08:26.701 "iscsi_set_discovery_auth", 00:08:26.701 "iscsi_get_options", 00:08:26.701 "iscsi_target_node_request_logout", 00:08:26.701 "iscsi_target_node_set_redirect", 00:08:26.701 "iscsi_target_node_set_auth", 00:08:26.701 "iscsi_target_node_add_lun", 00:08:26.701 "iscsi_get_stats", 00:08:26.701 "iscsi_get_connections", 00:08:26.701 "iscsi_portal_group_set_auth", 00:08:26.701 "iscsi_start_portal_group", 00:08:26.701 "iscsi_delete_portal_group", 00:08:26.701 "iscsi_create_portal_group", 00:08:26.701 "iscsi_get_portal_groups", 00:08:26.701 "iscsi_delete_target_node", 00:08:26.701 "iscsi_target_node_remove_pg_ig_maps", 00:08:26.701 "iscsi_target_node_add_pg_ig_maps", 00:08:26.701 "iscsi_create_target_node", 00:08:26.701 "iscsi_get_target_nodes", 00:08:26.701 "iscsi_delete_initiator_group", 00:08:26.701 "iscsi_initiator_group_remove_initiators", 00:08:26.701 "iscsi_initiator_group_add_initiators", 00:08:26.701 "iscsi_create_initiator_group", 00:08:26.701 "iscsi_get_initiator_groups", 00:08:26.701 "nvmf_set_crdt", 00:08:26.701 "nvmf_set_config", 00:08:26.701 "nvmf_set_max_subsystems", 00:08:26.701 "nvmf_stop_mdns_prr", 00:08:26.701 "nvmf_publish_mdns_prr", 00:08:26.701 "nvmf_subsystem_get_listeners", 00:08:26.701 
"nvmf_subsystem_get_qpairs", 00:08:26.701 "nvmf_subsystem_get_controllers", 00:08:26.701 "nvmf_get_stats", 00:08:26.701 "nvmf_get_transports", 00:08:26.701 "nvmf_create_transport", 00:08:26.701 "nvmf_get_targets", 00:08:26.701 "nvmf_delete_target", 00:08:26.701 "nvmf_create_target", 00:08:26.701 "nvmf_subsystem_allow_any_host", 00:08:26.701 "nvmf_subsystem_set_keys", 00:08:26.701 "nvmf_subsystem_remove_host", 00:08:26.701 "nvmf_subsystem_add_host", 00:08:26.701 "nvmf_ns_remove_host", 00:08:26.701 "nvmf_ns_add_host", 00:08:26.701 "nvmf_subsystem_remove_ns", 00:08:26.701 "nvmf_subsystem_set_ns_ana_group", 00:08:26.701 "nvmf_subsystem_add_ns", 00:08:26.701 "nvmf_subsystem_listener_set_ana_state", 00:08:26.701 "nvmf_discovery_get_referrals", 00:08:26.701 "nvmf_discovery_remove_referral", 00:08:26.701 "nvmf_discovery_add_referral", 00:08:26.701 "nvmf_subsystem_remove_listener", 00:08:26.701 "nvmf_subsystem_add_listener", 00:08:26.701 "nvmf_delete_subsystem", 00:08:26.701 "nvmf_create_subsystem", 00:08:26.701 "nvmf_get_subsystems", 00:08:26.701 "env_dpdk_get_mem_stats", 00:08:26.701 "nbd_get_disks", 00:08:26.701 "nbd_stop_disk", 00:08:26.701 "nbd_start_disk", 00:08:26.701 "ublk_recover_disk", 00:08:26.701 "ublk_get_disks", 00:08:26.701 "ublk_stop_disk", 00:08:26.701 "ublk_start_disk", 00:08:26.701 "ublk_destroy_target", 00:08:26.701 "ublk_create_target", 00:08:26.701 "virtio_blk_create_transport", 00:08:26.701 "virtio_blk_get_transports", 00:08:26.701 "vhost_controller_set_coalescing", 00:08:26.701 "vhost_get_controllers", 00:08:26.701 "vhost_delete_controller", 00:08:26.701 "vhost_create_blk_controller", 00:08:26.701 "vhost_scsi_controller_remove_target", 00:08:26.701 "vhost_scsi_controller_add_target", 00:08:26.701 "vhost_start_scsi_controller", 00:08:26.701 "vhost_create_scsi_controller", 00:08:26.702 "thread_set_cpumask", 00:08:26.702 "scheduler_set_options", 00:08:26.702 "framework_get_governor", 00:08:26.702 "framework_get_scheduler", 00:08:26.702 "framework_set_scheduler", 00:08:26.702 "framework_get_reactors", 00:08:26.702 "thread_get_io_channels", 00:08:26.702 "thread_get_pollers", 00:08:26.702 "thread_get_stats", 00:08:26.702 "framework_monitor_context_switch", 00:08:26.702 "spdk_kill_instance", 00:08:26.702 "log_enable_timestamps", 00:08:26.702 "log_get_flags", 00:08:26.702 "log_clear_flag", 00:08:26.702 "log_set_flag", 00:08:26.702 "log_get_level", 00:08:26.702 "log_set_level", 00:08:26.702 "log_get_print_level", 00:08:26.702 "log_set_print_level", 00:08:26.702 "framework_enable_cpumask_locks", 00:08:26.702 "framework_disable_cpumask_locks", 00:08:26.702 "framework_wait_init", 00:08:26.702 "framework_start_init", 00:08:26.702 "scsi_get_devices", 00:08:26.702 "bdev_get_histogram", 00:08:26.702 "bdev_enable_histogram", 00:08:26.702 "bdev_set_qos_limit", 00:08:26.702 "bdev_set_qd_sampling_period", 00:08:26.702 "bdev_get_bdevs", 00:08:26.702 "bdev_reset_iostat", 00:08:26.702 "bdev_get_iostat", 00:08:26.702 "bdev_examine", 00:08:26.702 "bdev_wait_for_examine", 00:08:26.702 "bdev_set_options", 00:08:26.702 "accel_get_stats", 00:08:26.702 "accel_set_options", 00:08:26.702 "accel_set_driver", 00:08:26.702 "accel_crypto_key_destroy", 00:08:26.702 "accel_crypto_keys_get", 00:08:26.702 "accel_crypto_key_create", 00:08:26.702 "accel_assign_opc", 00:08:26.702 "accel_get_module_info", 00:08:26.702 "accel_get_opc_assignments", 00:08:26.702 "vmd_rescan", 00:08:26.702 "vmd_remove_device", 00:08:26.702 "vmd_enable", 00:08:26.702 "sock_get_default_impl", 00:08:26.702 "sock_set_default_impl", 
00:08:26.702 "sock_impl_set_options", 00:08:26.702 "sock_impl_get_options", 00:08:26.702 "iobuf_get_stats", 00:08:26.702 "iobuf_set_options", 00:08:26.702 "keyring_get_keys", 00:08:26.702 "vfu_tgt_set_base_path", 00:08:26.702 "framework_get_pci_devices", 00:08:26.702 "framework_get_config", 00:08:26.702 "framework_get_subsystems", 00:08:26.702 "fsdev_set_opts", 00:08:26.702 "fsdev_get_opts", 00:08:26.702 "trace_get_info", 00:08:26.702 "trace_get_tpoint_group_mask", 00:08:26.702 "trace_disable_tpoint_group", 00:08:26.702 "trace_enable_tpoint_group", 00:08:26.702 "trace_clear_tpoint_mask", 00:08:26.702 "trace_set_tpoint_mask", 00:08:26.702 "notify_get_notifications", 00:08:26.702 "notify_get_types", 00:08:26.702 "spdk_get_version", 00:08:26.702 "rpc_get_methods" 00:08:26.702 ] 00:08:26.702 06:18:58 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.702 06:18:58 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:08:26.702 06:18:58 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1970439 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@952 -- # '[' -z 1970439 ']' 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@956 -- # kill -0 1970439 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@957 -- # uname 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1970439 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1970439' 00:08:26.702 killing process with pid 1970439 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@971 -- # kill 1970439 00:08:26.702 06:18:58 spdkcli_tcp -- common/autotest_common.sh@976 -- # wait 1970439 00:08:26.959 00:08:26.959 real 0m1.330s 00:08:26.959 user 0m2.361s 00:08:26.959 sys 0m0.464s 00:08:26.959 06:18:58 spdkcli_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:26.959 06:18:58 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:26.959 ************************************ 00:08:26.959 END TEST spdkcli_tcp 00:08:26.959 ************************************ 00:08:27.218 06:18:58 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:27.218 06:18:58 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:27.218 06:18:58 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:27.218 06:18:58 -- common/autotest_common.sh@10 -- # set +x 00:08:27.218 ************************************ 00:08:27.218 START TEST dpdk_mem_utility 00:08:27.218 ************************************ 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:08:27.218 * Looking for test storage... 
00:08:27.218 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lcov --version 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:27.218 06:18:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.218 --rc genhtml_branch_coverage=1 00:08:27.218 --rc genhtml_function_coverage=1 00:08:27.218 --rc genhtml_legend=1 00:08:27.218 --rc geninfo_all_blocks=1 00:08:27.218 --rc geninfo_unexecuted_blocks=1 00:08:27.218 00:08:27.218 ' 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.218 --rc 
genhtml_branch_coverage=1 00:08:27.218 --rc genhtml_function_coverage=1 00:08:27.218 --rc genhtml_legend=1 00:08:27.218 --rc geninfo_all_blocks=1 00:08:27.218 --rc geninfo_unexecuted_blocks=1 00:08:27.218 00:08:27.218 ' 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.218 --rc genhtml_branch_coverage=1 00:08:27.218 --rc genhtml_function_coverage=1 00:08:27.218 --rc genhtml_legend=1 00:08:27.218 --rc geninfo_all_blocks=1 00:08:27.218 --rc geninfo_unexecuted_blocks=1 00:08:27.218 00:08:27.218 ' 00:08:27.218 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:27.218 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:27.218 --rc genhtml_branch_coverage=1 00:08:27.218 --rc genhtml_function_coverage=1 00:08:27.218 --rc genhtml_legend=1 00:08:27.218 --rc geninfo_all_blocks=1 00:08:27.218 --rc geninfo_unexecuted_blocks=1 00:08:27.219 00:08:27.219 ' 00:08:27.219 06:18:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:27.219 06:18:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1970654 00:08:27.219 06:18:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:08:27.219 06:18:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1970654 00:08:27.219 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@833 -- # '[' -z 1970654 ']' 00:08:27.219 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:27.219 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:27.219 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:27.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:27.219 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:27.219 06:18:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:27.219 [2024-11-20 06:18:59.035445] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:08:27.219 [2024-11-20 06:18:59.035522] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970654 ] 00:08:27.480 [2024-11-20 06:18:59.100192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:27.480 [2024-11-20 06:18:59.156512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:27.740 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:27.740 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@866 -- # return 0 00:08:27.740 06:18:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:08:27.740 06:18:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:08:27.740 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:27.740 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:27.740 { 00:08:27.740 "filename": "/tmp/spdk_mem_dump.txt" 00:08:27.740 } 00:08:27.740 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:27.740 06:18:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:08:27.740 DPDK memory size 818.000000 MiB in 1 heap(s) 00:08:27.740 1 heaps totaling size 818.000000 MiB 00:08:27.740 size: 818.000000 MiB heap id: 0 00:08:27.740 end heaps---------- 00:08:27.740 9 mempools totaling size 603.782043 MiB 00:08:27.740 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:08:27.740 size: 158.602051 MiB name: PDU_data_out_Pool 00:08:27.740 size: 100.555481 MiB name: bdev_io_1970654 00:08:27.740 size: 50.003479 MiB name: msgpool_1970654 00:08:27.740 size: 36.509338 MiB name: fsdev_io_1970654 00:08:27.740 size: 21.763794 MiB name: PDU_Pool 00:08:27.740 size: 19.513306 MiB name: SCSI_TASK_Pool 00:08:27.740 size: 4.133484 MiB name: evtpool_1970654 00:08:27.740 size: 0.026123 MiB name: Session_Pool 00:08:27.740 end mempools------- 00:08:27.740 6 memzones totaling size 4.142822 MiB 00:08:27.740 size: 1.000366 MiB name: RG_ring_0_1970654 00:08:27.740 size: 1.000366 MiB name: RG_ring_1_1970654 00:08:27.740 size: 1.000366 MiB name: RG_ring_4_1970654 00:08:27.740 size: 1.000366 MiB name: RG_ring_5_1970654 00:08:27.740 size: 0.125366 MiB name: RG_ring_2_1970654 00:08:27.740 size: 0.015991 MiB name: RG_ring_3_1970654 00:08:27.740 end memzones------- 00:08:27.740 06:18:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:08:27.740 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:08:27.740 list of free elements. 
size: 10.852478 MiB 00:08:27.740 element at address: 0x200019200000 with size: 0.999878 MiB 00:08:27.740 element at address: 0x200019400000 with size: 0.999878 MiB 00:08:27.740 element at address: 0x200000400000 with size: 0.998535 MiB 00:08:27.740 element at address: 0x200032000000 with size: 0.994446 MiB 00:08:27.740 element at address: 0x200006400000 with size: 0.959839 MiB 00:08:27.740 element at address: 0x200012c00000 with size: 0.944275 MiB 00:08:27.740 element at address: 0x200019600000 with size: 0.936584 MiB 00:08:27.740 element at address: 0x200000200000 with size: 0.717346 MiB 00:08:27.740 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:08:27.740 element at address: 0x200000c00000 with size: 0.495422 MiB 00:08:27.740 element at address: 0x20000a600000 with size: 0.490723 MiB 00:08:27.740 element at address: 0x200019800000 with size: 0.485657 MiB 00:08:27.740 element at address: 0x200003e00000 with size: 0.481934 MiB 00:08:27.740 element at address: 0x200028200000 with size: 0.410034 MiB 00:08:27.740 element at address: 0x200000800000 with size: 0.355042 MiB 00:08:27.740 list of standard malloc elements. size: 199.218628 MiB 00:08:27.740 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:08:27.740 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:08:27.740 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:08:27.740 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:08:27.740 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:08:27.740 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:08:27.740 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:08:27.740 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:08:27.740 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:08:27.740 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000085b040 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000085f300 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000087f680 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200000cff000 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200003efb980 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 
00:08:27.740 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:08:27.740 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200028268f80 with size: 0.000183 MiB 00:08:27.740 element at address: 0x200028269040 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:08:27.740 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:08:27.740 list of memzone associated elements. size: 607.928894 MiB 00:08:27.740 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:08:27.740 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:08:27.740 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:08:27.740 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:08:27.740 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:08:27.740 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_1970654_0 00:08:27.740 element at address: 0x200000dff380 with size: 48.003052 MiB 00:08:27.740 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1970654_0 00:08:27.740 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:08:27.740 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1970654_0 00:08:27.740 element at address: 0x2000199be940 with size: 20.255554 MiB 00:08:27.740 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:08:27.740 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:08:27.740 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:08:27.740 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:08:27.740 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1970654_0 00:08:27.740 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:08:27.740 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1970654 00:08:27.740 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:08:27.740 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1970654 00:08:27.740 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:08:27.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:08:27.740 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:08:27.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:08:27.740 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:08:27.740 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:08:27.740 element at address: 0x200003efba40 with size: 1.008118 MiB 00:08:27.740 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:08:27.740 element at address: 0x200000cff180 with size: 1.000488 MiB 00:08:27.740 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1970654 00:08:27.740 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:08:27.740 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1970654 00:08:27.740 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:08:27.740 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1970654 00:08:27.740 element at address: 
0x2000320fe940 with size: 1.000488 MiB 00:08:27.740 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1970654 00:08:27.740 element at address: 0x20000087f740 with size: 0.500488 MiB 00:08:27.740 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1970654 00:08:27.740 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:08:27.740 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1970654 00:08:27.740 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:08:27.740 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:08:27.741 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:08:27.741 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:08:27.741 element at address: 0x20001987c540 with size: 0.250488 MiB 00:08:27.741 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:08:27.741 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:08:27.741 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1970654 00:08:27.741 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:08:27.741 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1970654 00:08:27.741 element at address: 0x2000064f5b80 with size: 0.031738 MiB 00:08:27.741 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:08:27.741 element at address: 0x200028269100 with size: 0.023743 MiB 00:08:27.741 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:08:27.741 element at address: 0x20000085b100 with size: 0.016113 MiB 00:08:27.741 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1970654 00:08:27.741 element at address: 0x20002826f240 with size: 0.002441 MiB 00:08:27.741 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:08:27.741 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:08:27.741 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1970654 00:08:27.741 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:08:27.741 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1970654 00:08:27.741 element at address: 0x20000085af00 with size: 0.000305 MiB 00:08:27.741 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1970654 00:08:27.741 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:08:27.741 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:08:27.741 06:18:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:08:27.741 06:18:59 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1970654 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@952 -- # '[' -z 1970654 ']' 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@956 -- # kill -0 1970654 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # uname 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1970654 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1970654' 00:08:27.741 killing process with pid 1970654 00:08:27.741 06:18:59 
dpdk_mem_utility -- common/autotest_common.sh@971 -- # kill 1970654 00:08:27.741 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@976 -- # wait 1970654 00:08:28.305 00:08:28.305 real 0m1.143s 00:08:28.305 user 0m1.115s 00:08:28.305 sys 0m0.418s 00:08:28.305 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:28.305 06:18:59 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:08:28.305 ************************************ 00:08:28.305 END TEST dpdk_mem_utility 00:08:28.305 ************************************ 00:08:28.305 06:19:00 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:28.305 06:19:00 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:28.305 06:19:00 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.305 06:19:00 -- common/autotest_common.sh@10 -- # set +x 00:08:28.305 ************************************ 00:08:28.305 START TEST event 00:08:28.305 ************************************ 00:08:28.305 06:19:00 event -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:08:28.305 * Looking for test storage... 00:08:28.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:28.305 06:19:00 event -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:28.305 06:19:00 event -- common/autotest_common.sh@1691 -- # lcov --version 00:08:28.305 06:19:00 event -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:28.563 06:19:00 event -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:28.563 06:19:00 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:28.563 06:19:00 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:28.563 06:19:00 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:28.563 06:19:00 event -- scripts/common.sh@336 -- # IFS=.-: 00:08:28.563 06:19:00 event -- scripts/common.sh@336 -- # read -ra ver1 00:08:28.563 06:19:00 event -- scripts/common.sh@337 -- # IFS=.-: 00:08:28.563 06:19:00 event -- scripts/common.sh@337 -- # read -ra ver2 00:08:28.563 06:19:00 event -- scripts/common.sh@338 -- # local 'op=<' 00:08:28.563 06:19:00 event -- scripts/common.sh@340 -- # ver1_l=2 00:08:28.563 06:19:00 event -- scripts/common.sh@341 -- # ver2_l=1 00:08:28.563 06:19:00 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:28.563 06:19:00 event -- scripts/common.sh@344 -- # case "$op" in 00:08:28.563 06:19:00 event -- scripts/common.sh@345 -- # : 1 00:08:28.563 06:19:00 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:28.563 06:19:00 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:28.563 06:19:00 event -- scripts/common.sh@365 -- # decimal 1 00:08:28.563 06:19:00 event -- scripts/common.sh@353 -- # local d=1 00:08:28.563 06:19:00 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:28.563 06:19:00 event -- scripts/common.sh@355 -- # echo 1 00:08:28.563 06:19:00 event -- scripts/common.sh@365 -- # ver1[v]=1 00:08:28.563 06:19:00 event -- scripts/common.sh@366 -- # decimal 2 00:08:28.563 06:19:00 event -- scripts/common.sh@353 -- # local d=2 00:08:28.563 06:19:00 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:28.563 06:19:00 event -- scripts/common.sh@355 -- # echo 2 00:08:28.564 06:19:00 event -- scripts/common.sh@366 -- # ver2[v]=2 00:08:28.564 06:19:00 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:28.564 06:19:00 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:28.564 06:19:00 event -- scripts/common.sh@368 -- # return 0 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.564 --rc genhtml_branch_coverage=1 00:08:28.564 --rc genhtml_function_coverage=1 00:08:28.564 --rc genhtml_legend=1 00:08:28.564 --rc geninfo_all_blocks=1 00:08:28.564 --rc geninfo_unexecuted_blocks=1 00:08:28.564 00:08:28.564 ' 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.564 --rc genhtml_branch_coverage=1 00:08:28.564 --rc genhtml_function_coverage=1 00:08:28.564 --rc genhtml_legend=1 00:08:28.564 --rc geninfo_all_blocks=1 00:08:28.564 --rc geninfo_unexecuted_blocks=1 00:08:28.564 00:08:28.564 ' 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.564 --rc genhtml_branch_coverage=1 00:08:28.564 --rc genhtml_function_coverage=1 00:08:28.564 --rc genhtml_legend=1 00:08:28.564 --rc geninfo_all_blocks=1 00:08:28.564 --rc geninfo_unexecuted_blocks=1 00:08:28.564 00:08:28.564 ' 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:28.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:28.564 --rc genhtml_branch_coverage=1 00:08:28.564 --rc genhtml_function_coverage=1 00:08:28.564 --rc genhtml_legend=1 00:08:28.564 --rc geninfo_all_blocks=1 00:08:28.564 --rc geninfo_unexecuted_blocks=1 00:08:28.564 00:08:28.564 ' 00:08:28.564 06:19:00 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:08:28.564 06:19:00 event -- bdev/nbd_common.sh@6 -- # set -e 00:08:28.564 06:19:00 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1103 -- # '[' 6 -le 1 ']' 00:08:28.564 06:19:00 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:28.564 06:19:00 event -- common/autotest_common.sh@10 -- # set +x 00:08:28.564 ************************************ 00:08:28.564 START TEST event_perf 00:08:28.564 ************************************ 00:08:28.564 06:19:00 event.event_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF 
-t 1 00:08:28.564 Running I/O for 1 seconds...[2024-11-20 06:19:00.217343] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:28.564 [2024-11-20 06:19:00.217406] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1970852 ] 00:08:28.564 [2024-11-20 06:19:00.290930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:28.564 [2024-11-20 06:19:00.356920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:28.564 [2024-11-20 06:19:00.357031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:28.564 [2024-11-20 06:19:00.357128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.564 [2024-11-20 06:19:00.357124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:29.975 Running I/O for 1 seconds... 00:08:29.975 lcore 0: 228435 00:08:29.975 lcore 1: 228434 00:08:29.976 lcore 2: 228433 00:08:29.976 lcore 3: 228434 00:08:29.976 done. 00:08:29.976 00:08:29.976 real 0m1.218s 00:08:29.976 user 0m4.130s 00:08:29.976 sys 0m0.079s 00:08:29.976 06:19:01 event.event_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:29.976 06:19:01 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:08:29.976 ************************************ 00:08:29.976 END TEST event_perf 00:08:29.976 ************************************ 00:08:29.976 06:19:01 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:29.976 06:19:01 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:29.976 06:19:01 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:29.976 06:19:01 event -- common/autotest_common.sh@10 -- # set +x 00:08:29.976 ************************************ 00:08:29.976 START TEST event_reactor 00:08:29.976 ************************************ 00:08:29.976 06:19:01 event.event_reactor -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:08:29.976 [2024-11-20 06:19:01.489169] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:08:29.976 [2024-11-20 06:19:01.489238] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971009 ] 00:08:29.976 [2024-11-20 06:19:01.555344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.976 [2024-11-20 06:19:01.608749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.934 test_start 00:08:30.934 oneshot 00:08:30.934 tick 100 00:08:30.934 tick 100 00:08:30.934 tick 250 00:08:30.934 tick 100 00:08:30.934 tick 100 00:08:30.934 tick 100 00:08:30.934 tick 250 00:08:30.934 tick 500 00:08:30.934 tick 100 00:08:30.934 tick 100 00:08:30.934 tick 250 00:08:30.934 tick 100 00:08:30.934 tick 100 00:08:30.934 test_end 00:08:30.934 00:08:30.934 real 0m1.196s 00:08:30.934 user 0m1.128s 00:08:30.934 sys 0m0.064s 00:08:30.934 06:19:02 event.event_reactor -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:30.934 06:19:02 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:08:30.934 ************************************ 00:08:30.934 END TEST event_reactor 00:08:30.934 ************************************ 00:08:30.934 06:19:02 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:30.934 06:19:02 event -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:08:30.934 06:19:02 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:30.934 06:19:02 event -- common/autotest_common.sh@10 -- # set +x 00:08:30.934 ************************************ 00:08:30.934 START TEST event_reactor_perf 00:08:30.934 ************************************ 00:08:30.934 06:19:02 event.event_reactor_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:08:30.934 [2024-11-20 06:19:02.739913] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:08:30.934 [2024-11-20 06:19:02.739979] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971165 ] 00:08:31.192 [2024-11-20 06:19:02.804463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.192 [2024-11-20 06:19:02.860856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.127 test_start 00:08:32.127 test_end 00:08:32.127 Performance: 445620 events per second 00:08:32.127 00:08:32.127 real 0m1.196s 00:08:32.127 user 0m1.126s 00:08:32.127 sys 0m0.067s 00:08:32.127 06:19:03 event.event_reactor_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:32.127 06:19:03 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:08:32.127 ************************************ 00:08:32.127 END TEST event_reactor_perf 00:08:32.127 ************************************ 00:08:32.127 06:19:03 event -- event/event.sh@49 -- # uname -s 00:08:32.127 06:19:03 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:08:32.128 06:19:03 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:32.128 06:19:03 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:32.128 06:19:03 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.128 06:19:03 event -- common/autotest_common.sh@10 -- # set +x 00:08:32.386 ************************************ 00:08:32.386 START TEST event_scheduler 00:08:32.386 ************************************ 00:08:32.386 06:19:03 event.event_scheduler -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:08:32.386 * Looking for test storage... 
00:08:32.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1691 -- # lcov --version 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:32.386 06:19:04 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.386 --rc genhtml_branch_coverage=1 00:08:32.386 --rc genhtml_function_coverage=1 00:08:32.386 --rc genhtml_legend=1 00:08:32.386 --rc geninfo_all_blocks=1 00:08:32.386 --rc geninfo_unexecuted_blocks=1 00:08:32.386 00:08:32.386 ' 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.386 --rc genhtml_branch_coverage=1 00:08:32.386 --rc genhtml_function_coverage=1 00:08:32.386 --rc genhtml_legend=1 00:08:32.386 --rc geninfo_all_blocks=1 00:08:32.386 --rc geninfo_unexecuted_blocks=1 00:08:32.386 00:08:32.386 ' 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.386 --rc genhtml_branch_coverage=1 00:08:32.386 --rc genhtml_function_coverage=1 00:08:32.386 --rc genhtml_legend=1 00:08:32.386 --rc geninfo_all_blocks=1 00:08:32.386 --rc geninfo_unexecuted_blocks=1 00:08:32.386 00:08:32.386 ' 00:08:32.386 06:19:04 event.event_scheduler -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:32.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:32.386 --rc genhtml_branch_coverage=1 00:08:32.386 --rc genhtml_function_coverage=1 00:08:32.386 --rc genhtml_legend=1 00:08:32.386 --rc geninfo_all_blocks=1 00:08:32.386 --rc geninfo_unexecuted_blocks=1 00:08:32.386 00:08:32.386 ' 00:08:32.386 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:08:32.386 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1971476 00:08:32.386 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:08:32.386 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:08:32.386 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 
1971476 00:08:32.387 06:19:04 event.event_scheduler -- common/autotest_common.sh@833 -- # '[' -z 1971476 ']' 00:08:32.387 06:19:04 event.event_scheduler -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:32.387 06:19:04 event.event_scheduler -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:32.387 06:19:04 event.event_scheduler -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:32.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:32.387 06:19:04 event.event_scheduler -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:32.387 06:19:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:32.387 [2024-11-20 06:19:04.175610] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:32.387 [2024-11-20 06:19:04.175720] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971476 ] 00:08:32.645 [2024-11-20 06:19:04.248384] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:32.645 [2024-11-20 06:19:04.310113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:32.645 [2024-11-20 06:19:04.310221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:32.645 [2024-11-20 06:19:04.310342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:32.645 [2024-11-20 06:19:04.310346] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:32.645 06:19:04 event.event_scheduler -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:32.645 06:19:04 event.event_scheduler -- common/autotest_common.sh@866 -- # return 0 00:08:32.645 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:08:32.645 06:19:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.645 06:19:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:32.645 [2024-11-20 06:19:04.407146] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:08:32.645 [2024-11-20 06:19:04.407173] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:08:32.645 [2024-11-20 06:19:04.407190] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:08:32.646 [2024-11-20 06:19:04.407200] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:08:32.646 [2024-11-20 06:19:04.407210] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:08:32.646 06:19:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.646 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:08:32.646 06:19:04 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.646 06:19:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 [2024-11-20 06:19:04.513297] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
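The trace above starts the scheduler test app with --wait-for-rpc, switches it to the dynamic scheduler (the dpdk governor fails to initialize because the 0xF core mask covers only some SMT siblings, so the dynamic scheduler falls back to its built-in limits of load 20, core limit 80, core busy 95), and only then runs framework_start_init. The same sequence can be driven by hand against any SPDK target started with --wait-for-rpc; a minimal sketch, assuming the default /var/tmp/spdk.sock RPC socket and using only RPCs that appear in the rpc_get_methods listing earlier in this log:

./scripts/rpc.py framework_set_scheduler dynamic   # set the policy while the target is still pre-init, as the test does
./scripts/rpc.py framework_start_init              # finish subsystem initialization
./scripts/rpc.py framework_get_scheduler           # confirm the active scheduler policy
./scripts/rpc.py framework_get_reactors            # inspect per-core reactor and thread placement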
00:08:32.905 06:19:04 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:08:32.905 06:19:04 event.event_scheduler -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:32.905 06:19:04 event.event_scheduler -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 ************************************ 00:08:32.905 START TEST scheduler_create_thread 00:08:32.905 ************************************ 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1127 -- # scheduler_create_thread 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 2 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 3 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 4 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 5 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 6 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 7 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 8 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 9 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 10 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:32.905 06:19:04 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.471 06:19:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:33.471 00:08:33.471 real 0m0.591s 00:08:33.471 user 0m0.013s 00:08:33.471 sys 0m0.003s 00:08:33.471 06:19:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:33.471 06:19:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:08:33.471 ************************************ 00:08:33.471 END TEST scheduler_create_thread 00:08:33.471 ************************************ 00:08:33.471 06:19:05 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:08:33.471 06:19:05 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1971476 00:08:33.471 06:19:05 event.event_scheduler -- common/autotest_common.sh@952 -- # '[' -z 1971476 ']' 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@956 -- # kill -0 1971476 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@957 -- # uname 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1971476 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1971476' 00:08:33.472 killing process with pid 1971476 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@971 -- # kill 1971476 00:08:33.472 06:19:05 event.event_scheduler -- common/autotest_common.sh@976 -- # wait 1971476 00:08:34.038 [2024-11-20 06:19:05.613523] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
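The scheduler_create_thread subtest above drives rpc.py through a test-only plugin: scheduler_thread_create builds pinned active and idle threads with various cpumasks and activity levels, scheduler_thread_set_active 11 50 raises one thread's activity to 50%, and scheduler_thread_delete 12 removes another before the app is shut down. These RPCs are registered by the plugin, not by the core target; a rough sketch of the standalone invocation pattern follows, with the plugin path given as an assumption.

# PYTHONPATH must point at the directory providing scheduler_plugin.py
# (assumed here to be the scheduler test directory):
export PYTHONPATH=./test/event/scheduler
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12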
00:08:34.038 00:08:34.038 real 0m1.861s 00:08:34.038 user 0m2.475s 00:08:34.038 sys 0m0.383s 00:08:34.038 06:19:05 event.event_scheduler -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:34.038 06:19:05 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:08:34.038 ************************************ 00:08:34.038 END TEST event_scheduler 00:08:34.038 ************************************ 00:08:34.038 06:19:05 event -- event/event.sh@51 -- # modprobe -n nbd 00:08:34.038 06:19:05 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:08:34.038 06:19:05 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:34.038 06:19:05 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:34.038 06:19:05 event -- common/autotest_common.sh@10 -- # set +x 00:08:34.296 ************************************ 00:08:34.296 START TEST app_repeat 00:08:34.296 ************************************ 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@1127 -- # app_repeat_test 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1971667 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1971667' 00:08:34.296 Process app_repeat pid: 1971667 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:08:34.296 spdk_app_start Round 0 00:08:34.296 06:19:05 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1971667 /var/tmp/spdk-nbd.sock 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1971667 ']' 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:34.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:34.296 06:19:05 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:34.297 [2024-11-20 06:19:05.922994] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:08:34.297 [2024-11-20 06:19:05.923061] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1971667 ] 00:08:34.297 [2024-11-20 06:19:05.990454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:34.297 [2024-11-20 06:19:06.045721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.297 [2024-11-20 06:19:06.045726] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.554 06:19:06 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:34.554 06:19:06 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:34.554 06:19:06 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:34.812 Malloc0 00:08:34.812 06:19:06 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:35.071 Malloc1 00:08:35.071 06:19:06 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.071 06:19:06 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:35.329 /dev/nbd0 00:08:35.329 06:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:35.329 06:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 
/proc/partitions 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.329 1+0 records in 00:08:35.329 1+0 records out 00:08:35.329 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000184769 s, 22.2 MB/s 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.329 06:19:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:35.329 06:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.329 06:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.329 06:19:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:35.588 /dev/nbd1 00:08:35.588 06:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:35.588 06:19:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:35.588 1+0 records in 00:08:35.588 1+0 records out 00:08:35.588 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225499 s, 18.2 MB/s 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:35.588 06:19:07 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:35.588 06:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:35.588 06:19:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:35.588 
06:19:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:35.588 06:19:07 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:35.588 06:19:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:36.153 { 00:08:36.153 "nbd_device": "/dev/nbd0", 00:08:36.153 "bdev_name": "Malloc0" 00:08:36.153 }, 00:08:36.153 { 00:08:36.153 "nbd_device": "/dev/nbd1", 00:08:36.153 "bdev_name": "Malloc1" 00:08:36.153 } 00:08:36.153 ]' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:36.153 { 00:08:36.153 "nbd_device": "/dev/nbd0", 00:08:36.153 "bdev_name": "Malloc0" 00:08:36.153 }, 00:08:36.153 { 00:08:36.153 "nbd_device": "/dev/nbd1", 00:08:36.153 "bdev_name": "Malloc1" 00:08:36.153 } 00:08:36.153 ]' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:36.153 /dev/nbd1' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:36.153 /dev/nbd1' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:36.153 256+0 records in 00:08:36.153 256+0 records out 00:08:36.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00511076 s, 205 MB/s 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:36.153 256+0 records in 00:08:36.153 256+0 records out 00:08:36.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0201751 s, 52.0 MB/s 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:36.153 256+0 records in 00:08:36.153 256+0 records out 00:08:36.153 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0219108 s, 47.9 MB/s 00:08:36.153 06:19:07 
event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:36.153 06:19:07 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:36.154 06:19:07 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.154 06:19:07 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:36.412 06:19:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i 
<= 20 )) 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:36.671 06:19:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:36.929 06:19:08 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:36.929 06:19:08 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:37.188 06:19:08 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:37.446 [2024-11-20 06:19:09.212818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:37.446 [2024-11-20 06:19:09.267135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:37.446 [2024-11-20 06:19:09.267135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:37.704 [2024-11-20 06:19:09.327757] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:37.704 [2024-11-20 06:19:09.327822] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:40.231 06:19:11 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:40.231 06:19:11 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:08:40.231 spdk_app_start Round 1 00:08:40.231 06:19:11 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1971667 /var/tmp/spdk-nbd.sock 00:08:40.231 06:19:11 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1971667 ']' 00:08:40.231 06:19:11 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:40.231 06:19:11 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:40.231 06:19:11 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:40.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:40.231 06:19:11 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:40.231 06:19:11 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:40.489 06:19:12 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:40.489 06:19:12 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:40.489 06:19:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:40.746 Malloc0 00:08:40.747 06:19:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:41.005 Malloc1 00:08:41.005 06:19:12 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.005 06:19:12 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:41.571 /dev/nbd0 00:08:41.571 06:19:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:41.571 06:19:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:41.571 1+0 records in 00:08:41.571 1+0 records out 00:08:41.571 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000233128 s, 17.6 MB/s 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:41.571 06:19:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:41.571 06:19:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.571 06:19:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.571 06:19:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:41.829 /dev/nbd1 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:41.829 1+0 records in 00:08:41.829 1+0 records out 00:08:41.829 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183061 s, 22.4 MB/s 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:41.829 06:19:13 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:41.829 06:19:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:42.087 { 00:08:42.087 "nbd_device": "/dev/nbd0", 00:08:42.087 "bdev_name": "Malloc0" 00:08:42.087 }, 00:08:42.087 { 00:08:42.087 "nbd_device": "/dev/nbd1", 00:08:42.087 "bdev_name": "Malloc1" 00:08:42.087 } 00:08:42.087 ]' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:42.087 { 00:08:42.087 "nbd_device": "/dev/nbd0", 00:08:42.087 "bdev_name": "Malloc0" 00:08:42.087 }, 00:08:42.087 { 00:08:42.087 "nbd_device": "/dev/nbd1", 00:08:42.087 "bdev_name": "Malloc1" 00:08:42.087 } 00:08:42.087 ]' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:42.087 /dev/nbd1' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:42.087 /dev/nbd1' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:42.087 256+0 records in 00:08:42.087 256+0 records out 00:08:42.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00545637 s, 192 MB/s 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:42.087 256+0 records in 00:08:42.087 256+0 records out 00:08:42.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0196588 s, 53.3 MB/s 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:42.087 06:19:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:42.087 256+0 records in 00:08:42.087 256+0 records out 00:08:42.087 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0214632 s, 48.9 MB/s 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.088 06:19:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:42.346 06:19:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:42.912 06:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:43.170 06:19:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:43.170 06:19:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:43.427 06:19:15 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:43.686 [2024-11-20 06:19:15.288935] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:43.686 [2024-11-20 06:19:15.344242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:43.686 [2024-11-20 06:19:15.344242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:43.686 [2024-11-20 06:19:15.404506] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:43.686 [2024-11-20 06:19:15.404591] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:46.966 06:19:18 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:08:46.966 06:19:18 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:08:46.966 spdk_app_start Round 2 00:08:46.966 06:19:18 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1971667 /var/tmp/spdk-nbd.sock 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1971667 ']' 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:46.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:46.966 06:19:18 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:46.966 06:19:18 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:46.966 Malloc0 00:08:46.966 06:19:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:08:47.224 Malloc1 00:08:47.224 06:19:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:08:47.224 06:19:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:47.225 06:19:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:47.225 06:19:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:47.225 06:19:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:08:47.225 06:19:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:47.225 06:19:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.225 06:19:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:08:47.483 /dev/nbd0 00:08:47.483 06:19:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:47.483 06:19:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd0 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd0 /proc/partitions 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:08:47.483 1+0 records in 00:08:47.483 1+0 records out 00:08:47.483 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0226854 s, 181 kB/s 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:47.483 06:19:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:47.483 06:19:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.483 06:19:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.483 06:19:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:08:47.741 /dev/nbd1 00:08:47.741 06:19:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:08:47.741 06:19:19 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@870 -- # local nbd_name=nbd1 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@871 -- # local i 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i = 1 )) 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@873 -- # (( i <= 20 )) 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@874 -- # grep -q -w nbd1 /proc/partitions 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@875 -- # break 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i = 1 )) 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@886 -- # (( i <= 20 )) 00:08:47.741 06:19:19 event.app_repeat -- common/autotest_common.sh@887 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:08:47.999 1+0 records in 00:08:47.999 1+0 records out 00:08:47.999 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000213446 s, 19.2 MB/s 00:08:47.999 06:19:19 event.app_repeat -- common/autotest_common.sh@888 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:47.999 06:19:19 event.app_repeat -- common/autotest_common.sh@888 -- # size=4096 00:08:47.999 06:19:19 event.app_repeat -- common/autotest_common.sh@889 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:08:47.999 06:19:19 event.app_repeat -- common/autotest_common.sh@890 -- # '[' 4096 '!=' 0 ']' 00:08:47.999 06:19:19 event.app_repeat -- common/autotest_common.sh@891 -- # return 0 00:08:47.999 06:19:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:47.999 06:19:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:08:47.999 06:19:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:47.999 06:19:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:47.999 06:19:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:08:48.257 { 00:08:48.257 "nbd_device": "/dev/nbd0", 00:08:48.257 "bdev_name": "Malloc0" 00:08:48.257 }, 00:08:48.257 { 00:08:48.257 "nbd_device": "/dev/nbd1", 00:08:48.257 "bdev_name": "Malloc1" 00:08:48.257 } 00:08:48.257 ]' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:48.257 { 00:08:48.257 "nbd_device": "/dev/nbd0", 00:08:48.257 "bdev_name": "Malloc0" 00:08:48.257 }, 00:08:48.257 { 00:08:48.257 "nbd_device": "/dev/nbd1", 00:08:48.257 "bdev_name": "Malloc1" 00:08:48.257 } 00:08:48.257 ]' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:08:48.257 /dev/nbd1' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:08:48.257 /dev/nbd1' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:08:48.257 06:19:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:08:48.258 256+0 records in 00:08:48.258 256+0 records out 00:08:48.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00387792 s, 270 MB/s 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:08:48.258 256+0 records in 00:08:48.258 256+0 records out 00:08:48.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0194541 s, 53.9 MB/s 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:08:48.258 256+0 records in 00:08:48.258 256+0 records out 00:08:48.258 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0216865 s, 48.4 MB/s 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.258 06:19:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:48.515 06:19:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:08:48.773 06:19:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:08:49.338 06:19:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:49.338 06:19:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:08:49.338 06:19:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:49.338 06:19:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:08:49.339 06:19:20 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:08:49.339 06:19:20 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:08:49.596 06:19:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:08:49.855 [2024-11-20 06:19:21.462320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:49.855 [2024-11-20 06:19:21.517557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.855 [2024-11-20 06:19:21.517562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.855 [2024-11-20 06:19:21.577299] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:08:49.855 [2024-11-20 06:19:21.577369] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:08:53.135 06:19:24 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1971667 /var/tmp/spdk-nbd.sock 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@833 -- # '[' -z 1971667 ']' 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:08:53.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@866 -- # return 0 00:08:53.135 06:19:24 event.app_repeat -- event/event.sh@39 -- # killprocess 1971667 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@952 -- # '[' -z 1971667 ']' 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@956 -- # kill -0 1971667 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@957 -- # uname 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1971667 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1971667' 00:08:53.135 killing process with pid 1971667 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@971 -- # kill 1971667 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@976 -- # wait 1971667 00:08:53.135 spdk_app_start is called in Round 0. 00:08:53.135 Shutdown signal received, stop current app iteration 00:08:53.135 Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 reinitialization... 00:08:53.135 spdk_app_start is called in Round 1. 00:08:53.135 Shutdown signal received, stop current app iteration 00:08:53.135 Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 reinitialization... 00:08:53.135 spdk_app_start is called in Round 2. 00:08:53.135 Shutdown signal received, stop current app iteration 00:08:53.135 Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 reinitialization... 00:08:53.135 spdk_app_start is called in Round 3. 
00:08:53.135 Shutdown signal received, stop current app iteration 00:08:53.135 06:19:24 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:08:53.135 06:19:24 event.app_repeat -- event/event.sh@42 -- # return 0 00:08:53.135 00:08:53.135 real 0m18.866s 00:08:53.135 user 0m41.642s 00:08:53.135 sys 0m3.272s 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:53.135 06:19:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:08:53.135 ************************************ 00:08:53.135 END TEST app_repeat 00:08:53.135 ************************************ 00:08:53.135 06:19:24 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:08:53.135 06:19:24 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:53.135 06:19:24 event -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.136 06:19:24 event -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.136 06:19:24 event -- common/autotest_common.sh@10 -- # set +x 00:08:53.136 ************************************ 00:08:53.136 START TEST cpu_locks 00:08:53.136 ************************************ 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:08:53.136 * Looking for test storage... 00:08:53.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1691 -- # lcov --version 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:53.136 06:19:24 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:08:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.136 --rc genhtml_branch_coverage=1 00:08:53.136 --rc genhtml_function_coverage=1 00:08:53.136 --rc genhtml_legend=1 00:08:53.136 --rc geninfo_all_blocks=1 00:08:53.136 --rc geninfo_unexecuted_blocks=1 00:08:53.136 00:08:53.136 ' 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:08:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.136 --rc genhtml_branch_coverage=1 00:08:53.136 --rc genhtml_function_coverage=1 00:08:53.136 --rc genhtml_legend=1 00:08:53.136 --rc geninfo_all_blocks=1 00:08:53.136 --rc geninfo_unexecuted_blocks=1 00:08:53.136 00:08:53.136 ' 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:08:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.136 --rc genhtml_branch_coverage=1 00:08:53.136 --rc genhtml_function_coverage=1 00:08:53.136 --rc genhtml_legend=1 00:08:53.136 --rc geninfo_all_blocks=1 00:08:53.136 --rc geninfo_unexecuted_blocks=1 00:08:53.136 00:08:53.136 ' 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:08:53.136 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:53.136 --rc genhtml_branch_coverage=1 00:08:53.136 --rc genhtml_function_coverage=1 00:08:53.136 --rc genhtml_legend=1 00:08:53.136 --rc geninfo_all_blocks=1 00:08:53.136 --rc geninfo_unexecuted_blocks=1 00:08:53.136 00:08:53.136 ' 00:08:53.136 06:19:24 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:08:53.136 06:19:24 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:08:53.136 06:19:24 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:08:53.136 06:19:24 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:53.136 06:19:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:53.394 ************************************ 
00:08:53.394 START TEST default_locks 00:08:53.394 ************************************ 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1127 -- # default_locks 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1974174 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1974174 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1974174 ']' 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:53.394 06:19:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:53.394 [2024-11-20 06:19:25.045695] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:53.394 [2024-11-20 06:19:25.045773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974174 ] 00:08:53.395 [2024-11-20 06:19:25.113853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:53.395 [2024-11-20 06:19:25.175871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.652 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:53.652 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 0 00:08:53.652 06:19:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1974174 00:08:53.652 06:19:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1974174 00:08:53.652 06:19:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:53.909 lslocks: write error 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1974174 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # '[' -z 1974174 ']' 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # kill -0 1974174 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # uname 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974174 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@970 -- # echo 'killing process with 
pid 1974174' 00:08:53.909 killing process with pid 1974174 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@971 -- # kill 1974174 00:08:53.909 06:19:25 event.cpu_locks.default_locks -- common/autotest_common.sh@976 -- # wait 1974174 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1974174 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1974174 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 1974174 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # '[' -z 1974174 ']' 00:08:54.474 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
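The default_locks run above checks that a spdk_tgt started with -m 0x1 really holds its per-core file lock: locks_exist pipes lslocks -p <pid> into grep -q spdk_cpu_lock, and the stray "lslocks: write error" line appears to be nothing more than lslocks hitting a closed pipe once grep -q has matched and exited. A minimal standalone version of that check; the pid lookup here is illustrative rather than the harness's own spdk_tgt_pid:

  # Sketch: confirm a running spdk_tgt holds a CPU-core lock file.
  # SPDK names the locks /var/tmp/spdk_cpu_lock_<core>, as seen later in this log.
  pid=$(pgrep -f spdk_tgt | head -n1)          # illustrative; the test passes its own pid
  if lslocks -p "$pid" | grep -q spdk_cpu_lock; then
      echo "core lock held by pid $pid"
  else
      echo "no spdk_cpu_lock entry for pid $pid" >&2
  fi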
00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1974174) - No such process 00:08:54.475 ERROR: process (pid: 1974174) is no longer running 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@866 -- # return 1 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:54.475 00:08:54.475 real 0m1.146s 00:08:54.475 user 0m1.086s 00:08:54.475 sys 0m0.534s 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:54.475 06:19:26 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 ************************************ 00:08:54.475 END TEST default_locks 00:08:54.475 ************************************ 00:08:54.475 06:19:26 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:08:54.475 06:19:26 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:54.475 06:19:26 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:54.475 06:19:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 ************************************ 00:08:54.475 START TEST default_locks_via_rpc 00:08:54.475 ************************************ 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1127 -- # default_locks_via_rpc 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1974339 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1974339 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1974339 ']' 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:54.475 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.475 [2024-11-20 06:19:26.244478] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:54.475 [2024-11-20 06:19:26.244545] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974339 ] 00:08:54.475 [2024-11-20 06:19:26.306261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.732 [2024-11-20 06:19:26.362463] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1974339 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1974339 00:08:54.990 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1974339 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # '[' -z 1974339 ']' 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # kill -0 1974339 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # uname 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974339 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:55.247 
06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1974339' 00:08:55.247 killing process with pid 1974339 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@971 -- # kill 1974339 00:08:55.247 06:19:26 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@976 -- # wait 1974339 00:08:55.814 00:08:55.814 real 0m1.155s 00:08:55.814 user 0m1.112s 00:08:55.814 sys 0m0.495s 00:08:55.814 06:19:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:55.814 06:19:27 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:08:55.814 ************************************ 00:08:55.814 END TEST default_locks_via_rpc 00:08:55.814 ************************************ 00:08:55.814 06:19:27 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:08:55.814 06:19:27 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:55.814 06:19:27 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:55.814 06:19:27 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:55.814 ************************************ 00:08:55.814 START TEST non_locking_app_on_locked_coremask 00:08:55.814 ************************************ 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # non_locking_app_on_locked_coremask 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1974509 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1974509 /var/tmp/spdk.sock 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1974509 ']' 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:55.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:55.814 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:55.814 [2024-11-20 06:19:27.452031] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
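The default_locks_via_rpc run above toggles the same core locks at runtime instead of at startup: rpc_cmd framework_disable_cpumask_locks releases the lock files (no_locks then finds none), and framework_enable_cpumask_locks re-claims them so locks_exist passes again. rpc_cmd in the harness is, in effect, a wrapper around SPDK's scripts/rpc.py, so a hedged manual equivalent would look roughly like this (repository path as used by this job):

  # Sketch: flip CPU-core locks on a running spdk_tgt over JSON-RPC.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/scripts/rpc.py framework_disable_cpumask_locks      # drop /var/tmp/spdk_cpu_lock_*
  lslocks | grep spdk_cpu_lock || echo "locks released"
  "$SPDK"/scripts/rpc.py framework_enable_cpumask_locks       # claim them back
  lslocks | grep spdk_cpu_lock && echo "locks re-acquired"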
00:08:55.814 [2024-11-20 06:19:27.452143] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974509 ] 00:08:55.814 [2024-11-20 06:19:27.516491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:55.814 [2024-11-20 06:19:27.572458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.071 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:56.071 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:56.071 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1974627 00:08:56.071 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1974627 /var/tmp/spdk2.sock 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1974627 ']' 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:56.072 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:56.072 06:19:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:56.072 [2024-11-20 06:19:27.900553] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:56.072 [2024-11-20 06:19:27.900669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974627 ] 00:08:56.329 [2024-11-20 06:19:27.996714] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
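non_locking_app_on_locked_coremask, above, demonstrates the supported way to share a core: while the first spdk_tgt (pid 1974509) keeps core 0 locked, the second instance is started with --disable-cpumask-locks and its own RPC socket, so it comes up with "CPU core locks deactivated" instead of fighting over /var/tmp/spdk_cpu_lock_000. A condensed sketch of that pairing, with plain sleeps standing in for the harness's waitforlisten:

  # Sketch: two targets on core 0 -- only the first claims the core lock.
  SPDK_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
  "$SPDK_TGT" -m 0x1 &                                                 # claims spdk_cpu_lock_000
  first=$!; sleep 2                                                    # crude wait, not waitforlisten
  "$SPDK_TGT" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &  # shares core 0, claims nothing
  second=$!; sleep 2
  lslocks | grep -c spdk_cpu_lock                                      # expect 1: only the first holds a lock
  kill "$first" "$second"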
00:08:56.329 [2024-11-20 06:19:27.996755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.329 [2024-11-20 06:19:28.113887] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.261 06:19:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:57.261 06:19:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:57.261 06:19:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1974509 00:08:57.261 06:19:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1974509 00:08:57.261 06:19:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:08:57.828 lslocks: write error 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1974509 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1974509 ']' 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1974509 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974509 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1974509' 00:08:57.828 killing process with pid 1974509 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1974509 00:08:57.828 06:19:29 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1974509 00:08:58.394 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1974627 00:08:58.394 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1974627 ']' 00:08:58.394 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1974627 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974627 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1974627' 00:08:58.653 
killing process with pid 1974627 00:08:58.653 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1974627 00:08:58.654 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1974627 00:08:58.915 00:08:58.915 real 0m3.288s 00:08:58.915 user 0m3.520s 00:08:58.915 sys 0m1.069s 00:08:58.915 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:08:58.915 06:19:30 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:58.915 ************************************ 00:08:58.915 END TEST non_locking_app_on_locked_coremask 00:08:58.915 ************************************ 00:08:58.915 06:19:30 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:08:58.915 06:19:30 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:08:58.915 06:19:30 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:08:58.915 06:19:30 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:08:58.915 ************************************ 00:08:58.915 START TEST locking_app_on_unlocked_coremask 00:08:58.915 ************************************ 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_unlocked_coremask 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1974943 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1974943 /var/tmp/spdk.sock 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1974943 ']' 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:58.915 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:58.915 06:19:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:59.173 [2024-11-20 06:19:30.790431] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:08:59.173 [2024-11-20 06:19:30.790547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1974943 ] 00:08:59.174 [2024-11-20 06:19:30.855435] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
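Each of these tests tears its targets down through the same killprocess idiom that runs all through the trace above: probe the pid with kill -0, read its command name back with ps --no-headers -o comm= (the reactor_0 / sudo comparison), then kill and wait. A simplified sketch of that helper; the real one in autotest_common.sh additionally special-cases processes running under sudo:

  # Sketch: teardown in the spirit of the harness's killprocess helper.
  killprocess() {
      local pid=$1
      kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
      ps --no-headers -o comm= "$pid"                 # harness compares this name against sudo
      echo "killing process with pid $pid"
      kill "$pid"
      wait "$pid" 2>/dev/null                         # works here because spdk_tgt is a child job of this shell
  }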
00:08:59.174 [2024-11-20 06:19:30.855473] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.174 [2024-11-20 06:19:30.908757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1975061 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1975061 /var/tmp/spdk2.sock 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1975061 ']' 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:08:59.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:08:59.432 06:19:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:08:59.432 [2024-11-20 06:19:31.238125] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:08:59.432 [2024-11-20 06:19:31.238228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975061 ] 00:08:59.690 [2024-11-20 06:19:31.334461] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:59.690 [2024-11-20 06:19:31.446873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.624 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:00.624 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:00.624 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1975061 00:09:00.624 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1975061 00:09:00.624 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:00.882 lslocks: write error 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1974943 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1974943 ']' 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1974943 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1974943 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1974943' 00:09:00.882 killing process with pid 1974943 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1974943 00:09:00.882 06:19:32 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1974943 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1975061 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1975061 ']' 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # kill -0 1975061 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1975061 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:01.817 06:19:33 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1975061' 00:09:01.817 killing process with pid 1975061 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@971 -- # kill 1975061 00:09:01.817 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@976 -- # wait 1975061 00:09:02.075 00:09:02.075 real 0m3.143s 00:09:02.075 user 0m3.386s 00:09:02.075 sys 0m1.007s 00:09:02.075 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:02.075 06:19:33 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.075 ************************************ 00:09:02.075 END TEST locking_app_on_unlocked_coremask 00:09:02.075 ************************************ 00:09:02.075 06:19:33 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:09:02.075 06:19:33 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:02.075 06:19:33 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:02.075 06:19:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:02.368 ************************************ 00:09:02.368 START TEST locking_app_on_locked_coremask 00:09:02.368 ************************************ 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1127 -- # locking_app_on_locked_coremask 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1975376 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1975376 /var/tmp/spdk.sock 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1975376 ']' 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:02.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:02.368 06:19:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.368 [2024-11-20 06:19:33.980884] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:09:02.368 [2024-11-20 06:19:33.980979] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975376 ] 00:09:02.368 [2024-11-20 06:19:34.045487] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.368 [2024-11-20 06:19:34.099829] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1975482 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1975482 /var/tmp/spdk2.sock 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1975482 /var/tmp/spdk2.sock 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1975482 /var/tmp/spdk2.sock 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # '[' -z 1975482 ']' 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:02.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:02.653 06:19:34 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:02.653 [2024-11-20 06:19:34.423521] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:09:02.653 [2024-11-20 06:19:34.423635] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975482 ] 00:09:02.912 [2024-11-20 06:19:34.528572] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1975376 has claimed it. 00:09:02.912 [2024-11-20 06:19:34.528643] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:03.477 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1975482) - No such process 00:09:03.477 ERROR: process (pid: 1975482) is no longer running 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@866 -- # return 1 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1975376 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1975376 00:09:03.477 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:09:03.735 lslocks: write error 00:09:03.735 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1975376 00:09:03.735 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # '[' -z 1975376 ']' 00:09:03.735 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # kill -0 1975376 00:09:03.735 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # uname 00:09:03.735 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:03.736 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1975376 00:09:03.736 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:03.736 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:03.736 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1975376' 00:09:03.736 killing process with pid 1975376 00:09:03.736 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@971 -- # kill 1975376 00:09:03.736 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@976 -- # wait 1975376 00:09:04.301 00:09:04.301 real 0m2.043s 00:09:04.301 user 0m2.246s 00:09:04.301 sys 0m0.635s 00:09:04.301 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 
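locking_app_on_locked_coremask, above, is the negative case: with locks left enabled, the second spdk_tgt aimed at the already-claimed core 0 dies with "Unable to acquire lock on assigned core mask - exiting", and the harness's NOT/valid_exec_arg wrapper converts that expected failure into a passing result (es=1, return 1 from waitforlisten). A much-simplified stand-in for that expect-failure pattern, not the harness's actual helper:

  # Sketch: succeed only when the wrapped command fails, as the NOT wrapper does.
  NOT() {
      if "$@"; then
          echo "expected '$*' to fail, but it succeeded" >&2
          return 1
      fi
      return 0
  }
  # e.g. a second lock-enabled target on a claimed core must refuse to start:
  NOT timeout 10 ./build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock    # paths illustrative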
00:09:04.301 06:19:35 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.301 ************************************ 00:09:04.301 END TEST locking_app_on_locked_coremask 00:09:04.301 ************************************ 00:09:04.301 06:19:35 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:09:04.301 06:19:35 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:04.301 06:19:35 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:04.301 06:19:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:04.301 ************************************ 00:09:04.301 START TEST locking_overlapped_coremask 00:09:04.301 ************************************ 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1975669 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1975669 /var/tmp/spdk.sock 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1975669 ']' 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:04.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.301 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.301 [2024-11-20 06:19:36.077230] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:09:04.301 [2024-11-20 06:19:36.077319] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975669 ] 00:09:04.558 [2024-11-20 06:19:36.147734] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:04.558 [2024-11-20 06:19:36.209183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:04.558 [2024-11-20 06:19:36.209250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:04.558 [2024-11-20 06:19:36.209253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:04.815 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:04.815 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 0 00:09:04.815 06:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1975694 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1975694 /var/tmp/spdk2.sock 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 1975694 /var/tmp/spdk2.sock 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 1975694 /var/tmp/spdk2.sock 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # '[' -z 1975694 ']' 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:04.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:04.816 06:19:36 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:04.816 [2024-11-20 06:19:36.558165] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:09:04.816 [2024-11-20 06:19:36.558260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975694 ] 00:09:05.072 [2024-11-20 06:19:36.667187] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1975669 has claimed it. 00:09:05.072 [2024-11-20 06:19:36.667249] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:09:05.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 848: kill: (1975694) - No such process 00:09:05.635 ERROR: process (pid: 1975694) is no longer running 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@866 -- # return 1 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1975669 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # '[' -z 1975669 ']' 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # kill -0 1975669 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # uname 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1975669 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1975669' 00:09:05.635 killing process with pid 1975669 00:09:05.635 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@971 -- # kill 1975669 00:09:05.635 06:19:37 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@976 -- # wait 1975669 00:09:06.199 00:09:06.199 real 0m1.713s 00:09:06.199 user 0m4.758s 00:09:06.199 sys 0m0.495s 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:09:06.199 ************************************ 00:09:06.199 END TEST locking_overlapped_coremask 00:09:06.199 ************************************ 00:09:06.199 06:19:37 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:09:06.199 06:19:37 event.cpu_locks -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:06.199 06:19:37 event.cpu_locks -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:06.199 06:19:37 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:06.199 ************************************ 00:09:06.199 START TEST locking_overlapped_coremask_via_rpc 00:09:06.199 ************************************ 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1127 -- # locking_overlapped_coremask_via_rpc 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1975969 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1975969 /var/tmp/spdk.sock 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1975969 ']' 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:06.199 06:19:37 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.199 [2024-11-20 06:19:37.841210] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:09:06.200 [2024-11-20 06:19:37.841288] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975969 ] 00:09:06.200 [2024-11-20 06:19:37.905799] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
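After the overlapped launch above fails, check_remaining_locks (event/cpu_locks.sh@36-38 in the trace) confirms that the surviving 0x7 target still owns exactly /var/tmp/spdk_cpu_lock_000 through _002 and nothing more. A condensed version of that comparison, using the same paths the log shows:

  # Sketch: assert that only cores 0-2 remain lock-claimed, as check_remaining_locks does.
  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  if [[ ${locks[*]} == "${locks_expected[*]}" ]]; then
      echo "only cores 0-2 are claimed, as expected"
  else
      echo "unexpected lock files: ${locks[*]}" >&2
  fi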
00:09:06.200 [2024-11-20 06:19:37.905843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.200 [2024-11-20 06:19:37.969123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.200 [2024-11-20 06:19:37.969193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.200 [2024-11-20 06:19:37.969196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1975988 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1975988 /var/tmp/spdk2.sock 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1975988 ']' 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:06.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:06.457 06:19:38 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:06.715 [2024-11-20 06:19:38.312701] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:09:06.715 [2024-11-20 06:19:38.312807] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1975988 ] 00:09:06.715 [2024-11-20 06:19:38.418741] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
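Note: both instances come up even though their core masks overlap, because 0x7 covers cores 0-2 and 0x1c covers cores 2-4, sharing core 2; with locking disabled the conflict is only detected once the locks are re-enabled over RPC below. A quick, illustrative shell check of the overlap (not part of the test itself):

  # intersect the two masks; a non-zero result means shared cores
  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. bit 2 = core 2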
00:09:06.715 [2024-11-20 06:19:38.418785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:06.715 [2024-11-20 06:19:38.545548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.715 [2024-11-20 06:19:38.545585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.715 [2024-11-20 06:19:38.545587] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:07.648 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.649 [2024-11-20 06:19:39.318402] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1975969 has claimed it. 
00:09:07.649 request: 00:09:07.649 { 00:09:07.649 "method": "framework_enable_cpumask_locks", 00:09:07.649 "req_id": 1 00:09:07.649 } 00:09:07.649 Got JSON-RPC error response 00:09:07.649 response: 00:09:07.649 { 00:09:07.649 "code": -32603, 00:09:07.649 "message": "Failed to claim CPU core: 2" 00:09:07.649 } 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1975969 /var/tmp/spdk.sock 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1975969 ']' 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:07.649 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.649 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1975988 /var/tmp/spdk2.sock 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # '[' -z 1975988 ']' 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk2.sock 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:09:07.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
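Note: the JSON-RPC exchange above is the intended failure path of this test: the first instance has already claimed /var/tmp/spdk_cpu_lock_000..002 via framework_enable_cpumask_locks, so the same RPC against the second instance, which overlaps on core 2, is rejected with error -32603 ("Failed to claim CPU core: 2"). A hedged sketch of driving the two calls by hand with the workspace's scripts/rpc.py (the harness does the equivalent through its rpc_cmd wrapper; running from the spdk checkout is assumed):

  # enable core locks on the default instance listening on /var/tmp/spdk.sock
  ./scripts/rpc.py framework_enable_cpumask_locks
  # the same call against the overlapping instance is expected to fail with -32603
  ./scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks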
00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:07.907 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@866 -- # return 0 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:09:08.165 00:09:08.165 real 0m2.090s 00:09:08.165 user 0m1.156s 00:09:08.165 sys 0m0.172s 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:08.165 06:19:39 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 ************************************ 00:09:08.165 END TEST locking_overlapped_coremask_via_rpc 00:09:08.165 ************************************ 00:09:08.165 06:19:39 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:09:08.165 06:19:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1975969 ]] 00:09:08.165 06:19:39 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1975969 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1975969 ']' 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1975969 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1975969 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1975969' 00:09:08.165 killing process with pid 1975969 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1975969 00:09:08.165 06:19:39 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1975969 00:09:08.731 06:19:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1975988 ]] 00:09:08.731 06:19:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1975988 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1975988 ']' 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1975988 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@957 -- # uname 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@957 -- # '[' 
Linux = Linux ']' 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1975988 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1975988' 00:09:08.731 killing process with pid 1975988 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@971 -- # kill 1975988 00:09:08.731 06:19:40 event.cpu_locks -- common/autotest_common.sh@976 -- # wait 1975988 00:09:09.296 06:19:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:09.297 06:19:40 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:09:09.297 06:19:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1975969 ]] 00:09:09.297 06:19:40 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1975969 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1975969 ']' 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1975969 00:09:09.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1975969) - No such process 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1975969 is not found' 00:09:09.297 Process with pid 1975969 is not found 00:09:09.297 06:19:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1975988 ]] 00:09:09.297 06:19:40 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1975988 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@952 -- # '[' -z 1975988 ']' 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@956 -- # kill -0 1975988 00:09:09.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (1975988) - No such process 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@979 -- # echo 'Process with pid 1975988 is not found' 00:09:09.297 Process with pid 1975988 is not found 00:09:09.297 06:19:40 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:09:09.297 00:09:09.297 real 0m16.045s 00:09:09.297 user 0m29.166s 00:09:09.297 sys 0m5.389s 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.297 06:19:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:09:09.297 ************************************ 00:09:09.297 END TEST cpu_locks 00:09:09.297 ************************************ 00:09:09.297 00:09:09.297 real 0m40.849s 00:09:09.297 user 1m19.882s 00:09:09.297 sys 0m9.530s 00:09:09.297 06:19:40 event -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:09.297 06:19:40 event -- common/autotest_common.sh@10 -- # set +x 00:09:09.297 ************************************ 00:09:09.297 END TEST event 00:09:09.297 ************************************ 00:09:09.297 06:19:40 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:09.297 06:19:40 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:09.297 06:19:40 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.297 06:19:40 -- common/autotest_common.sh@10 -- # set +x 00:09:09.297 ************************************ 00:09:09.297 START TEST thread 00:09:09.297 ************************************ 00:09:09.297 06:19:40 thread -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:09:09.297 * Looking for test storage... 00:09:09.297 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:09:09.297 06:19:40 thread -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:09.297 06:19:40 thread -- common/autotest_common.sh@1691 -- # lcov --version 00:09:09.297 06:19:40 thread -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:09.297 06:19:41 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:09.297 06:19:41 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:09.297 06:19:41 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:09.297 06:19:41 thread -- scripts/common.sh@336 -- # IFS=.-: 00:09:09.297 06:19:41 thread -- scripts/common.sh@336 -- # read -ra ver1 00:09:09.297 06:19:41 thread -- scripts/common.sh@337 -- # IFS=.-: 00:09:09.297 06:19:41 thread -- scripts/common.sh@337 -- # read -ra ver2 00:09:09.297 06:19:41 thread -- scripts/common.sh@338 -- # local 'op=<' 00:09:09.297 06:19:41 thread -- scripts/common.sh@340 -- # ver1_l=2 00:09:09.297 06:19:41 thread -- scripts/common.sh@341 -- # ver2_l=1 00:09:09.297 06:19:41 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:09.297 06:19:41 thread -- scripts/common.sh@344 -- # case "$op" in 00:09:09.297 06:19:41 thread -- scripts/common.sh@345 -- # : 1 00:09:09.297 06:19:41 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:09.297 06:19:41 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:09.297 06:19:41 thread -- scripts/common.sh@365 -- # decimal 1 00:09:09.297 06:19:41 thread -- scripts/common.sh@353 -- # local d=1 00:09:09.297 06:19:41 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:09.297 06:19:41 thread -- scripts/common.sh@355 -- # echo 1 00:09:09.297 06:19:41 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:09:09.297 06:19:41 thread -- scripts/common.sh@366 -- # decimal 2 00:09:09.297 06:19:41 thread -- scripts/common.sh@353 -- # local d=2 00:09:09.297 06:19:41 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:09.297 06:19:41 thread -- scripts/common.sh@355 -- # echo 2 00:09:09.297 06:19:41 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:09:09.297 06:19:41 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:09.297 06:19:41 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:09.297 06:19:41 thread -- scripts/common.sh@368 -- # return 0 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.297 --rc genhtml_branch_coverage=1 00:09:09.297 --rc genhtml_function_coverage=1 00:09:09.297 --rc genhtml_legend=1 00:09:09.297 --rc geninfo_all_blocks=1 00:09:09.297 --rc geninfo_unexecuted_blocks=1 00:09:09.297 00:09:09.297 ' 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.297 --rc genhtml_branch_coverage=1 00:09:09.297 --rc genhtml_function_coverage=1 00:09:09.297 --rc genhtml_legend=1 00:09:09.297 --rc geninfo_all_blocks=1 00:09:09.297 --rc geninfo_unexecuted_blocks=1 00:09:09.297 
00:09:09.297 ' 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.297 --rc genhtml_branch_coverage=1 00:09:09.297 --rc genhtml_function_coverage=1 00:09:09.297 --rc genhtml_legend=1 00:09:09.297 --rc geninfo_all_blocks=1 00:09:09.297 --rc geninfo_unexecuted_blocks=1 00:09:09.297 00:09:09.297 ' 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:09.297 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:09.297 --rc genhtml_branch_coverage=1 00:09:09.297 --rc genhtml_function_coverage=1 00:09:09.297 --rc genhtml_legend=1 00:09:09.297 --rc geninfo_all_blocks=1 00:09:09.297 --rc geninfo_unexecuted_blocks=1 00:09:09.297 00:09:09.297 ' 00:09:09.297 06:19:41 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:09.297 06:19:41 thread -- common/autotest_common.sh@10 -- # set +x 00:09:09.297 ************************************ 00:09:09.297 START TEST thread_poller_perf 00:09:09.297 ************************************ 00:09:09.297 06:19:41 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:09:09.297 [2024-11-20 06:19:41.120944] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:09:09.297 [2024-11-20 06:19:41.121020] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976489 ] 00:09:09.555 [2024-11-20 06:19:41.186416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:09.556 [2024-11-20 06:19:41.242023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:09.556 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:09:10.491 [2024-11-20T05:19:42.327Z] ====================================== 00:09:10.491 [2024-11-20T05:19:42.327Z] busy:2712462321 (cyc) 00:09:10.491 [2024-11-20T05:19:42.327Z] total_run_count: 362000 00:09:10.491 [2024-11-20T05:19:42.327Z] tsc_hz: 2700000000 (cyc) 00:09:10.491 [2024-11-20T05:19:42.327Z] ====================================== 00:09:10.491 [2024-11-20T05:19:42.327Z] poller_cost: 7492 (cyc), 2774 (nsec) 00:09:10.491 00:09:10.491 real 0m1.203s 00:09:10.491 user 0m1.140s 00:09:10.491 sys 0m0.058s 00:09:10.491 06:19:42 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:10.491 06:19:42 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:10.491 ************************************ 00:09:10.491 END TEST thread_poller_perf 00:09:10.491 ************************************ 00:09:10.750 06:19:42 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:10.750 06:19:42 thread -- common/autotest_common.sh@1103 -- # '[' 8 -le 1 ']' 00:09:10.750 06:19:42 thread -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:10.750 06:19:42 thread -- common/autotest_common.sh@10 -- # set +x 00:09:10.750 ************************************ 00:09:10.750 START TEST thread_poller_perf 00:09:10.750 ************************************ 00:09:10.750 06:19:42 thread.thread_poller_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:09:10.750 [2024-11-20 06:19:42.377611] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:09:10.750 [2024-11-20 06:19:42.377680] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976642 ] 00:09:10.750 [2024-11-20 06:19:42.444190] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.750 [2024-11-20 06:19:42.497980] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.750 Running 1000 pollers for 1 seconds with 0 microseconds period. 
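Note: for the 1-microsecond-period run summarized above, poller_cost is just the busy cycle count divided by total_run_count, converted to nanoseconds with the reported TSC frequency; the 0-microsecond run that follows works out the same way (2702133129 / 4824000 ≈ 560 cyc ≈ 207 nsec). Reproducing the arithmetic with values copied from this run (illustrative only):

  awk 'BEGIN { cyc = int(2712462321 / 362000); printf "poller_cost: %d cyc, %d nsec\n", cyc, cyc / 2.7 }'
  # -> 7492 cyc and 2774 nsec at tsc_hz = 2.7 GHz, matching the summary table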
00:09:12.124 [2024-11-20T05:19:43.960Z] ====================================== 00:09:12.124 [2024-11-20T05:19:43.960Z] busy:2702133129 (cyc) 00:09:12.124 [2024-11-20T05:19:43.960Z] total_run_count: 4824000 00:09:12.124 [2024-11-20T05:19:43.960Z] tsc_hz: 2700000000 (cyc) 00:09:12.124 [2024-11-20T05:19:43.960Z] ====================================== 00:09:12.124 [2024-11-20T05:19:43.960Z] poller_cost: 560 (cyc), 207 (nsec) 00:09:12.124 00:09:12.124 real 0m1.199s 00:09:12.124 user 0m1.130s 00:09:12.124 sys 0m0.064s 00:09:12.124 06:19:43 thread.thread_poller_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.124 06:19:43 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:09:12.124 ************************************ 00:09:12.124 END TEST thread_poller_perf 00:09:12.124 ************************************ 00:09:12.124 06:19:43 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:09:12.124 00:09:12.124 real 0m2.656s 00:09:12.124 user 0m2.408s 00:09:12.124 sys 0m0.253s 00:09:12.124 06:19:43 thread -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:12.124 06:19:43 thread -- common/autotest_common.sh@10 -- # set +x 00:09:12.124 ************************************ 00:09:12.124 END TEST thread 00:09:12.124 ************************************ 00:09:12.124 06:19:43 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:09:12.124 06:19:43 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:12.124 06:19:43 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:12.124 06:19:43 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:12.124 06:19:43 -- common/autotest_common.sh@10 -- # set +x 00:09:12.124 ************************************ 00:09:12.124 START TEST app_cmdline 00:09:12.124 ************************************ 00:09:12.124 06:19:43 app_cmdline -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:09:12.124 * Looking for test storage... 
00:09:12.124 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:12.124 06:19:43 app_cmdline -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:12.124 06:19:43 app_cmdline -- common/autotest_common.sh@1691 -- # lcov --version 00:09:12.124 06:19:43 app_cmdline -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:12.124 06:19:43 app_cmdline -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@345 -- # : 1 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:09:12.124 06:19:43 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:12.125 06:19:43 app_cmdline -- scripts/common.sh@368 -- # return 0 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:12.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.125 --rc genhtml_branch_coverage=1 00:09:12.125 --rc genhtml_function_coverage=1 00:09:12.125 --rc genhtml_legend=1 00:09:12.125 --rc geninfo_all_blocks=1 00:09:12.125 --rc geninfo_unexecuted_blocks=1 00:09:12.125 00:09:12.125 ' 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:12.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.125 --rc genhtml_branch_coverage=1 00:09:12.125 --rc genhtml_function_coverage=1 00:09:12.125 --rc genhtml_legend=1 00:09:12.125 --rc geninfo_all_blocks=1 00:09:12.125 --rc geninfo_unexecuted_blocks=1 
00:09:12.125 00:09:12.125 ' 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:12.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.125 --rc genhtml_branch_coverage=1 00:09:12.125 --rc genhtml_function_coverage=1 00:09:12.125 --rc genhtml_legend=1 00:09:12.125 --rc geninfo_all_blocks=1 00:09:12.125 --rc geninfo_unexecuted_blocks=1 00:09:12.125 00:09:12.125 ' 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:12.125 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:12.125 --rc genhtml_branch_coverage=1 00:09:12.125 --rc genhtml_function_coverage=1 00:09:12.125 --rc genhtml_legend=1 00:09:12.125 --rc geninfo_all_blocks=1 00:09:12.125 --rc geninfo_unexecuted_blocks=1 00:09:12.125 00:09:12.125 ' 00:09:12.125 06:19:43 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:09:12.125 06:19:43 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1976849 00:09:12.125 06:19:43 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:09:12.125 06:19:43 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1976849 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@833 -- # '[' -z 1976849 ']' 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:12.125 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:12.125 06:19:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.125 [2024-11-20 06:19:43.842766] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:09:12.125 [2024-11-20 06:19:43.842873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1976849 ] 00:09:12.125 [2024-11-20 06:19:43.911691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.383 [2024-11-20 06:19:43.970898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.642 06:19:44 app_cmdline -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:12.642 06:19:44 app_cmdline -- common/autotest_common.sh@866 -- # return 0 00:09:12.642 06:19:44 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:09:12.900 { 00:09:12.900 "version": "SPDK v25.01-pre git sha1 ecdb65a23", 00:09:12.900 "fields": { 00:09:12.900 "major": 25, 00:09:12.900 "minor": 1, 00:09:12.900 "patch": 0, 00:09:12.900 "suffix": "-pre", 00:09:12.900 "commit": "ecdb65a23" 00:09:12.900 } 00:09:12.900 } 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@26 -- # sort 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:09:12.900 06:19:44 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@650 -- # local es=0 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@644 -- 
# [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:12.900 06:19:44 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:09:13.158 request: 00:09:13.158 { 00:09:13.158 "method": "env_dpdk_get_mem_stats", 00:09:13.158 "req_id": 1 00:09:13.158 } 00:09:13.158 Got JSON-RPC error response 00:09:13.158 response: 00:09:13.158 { 00:09:13.158 "code": -32601, 00:09:13.158 "message": "Method not found" 00:09:13.158 } 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@653 -- # es=1 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:09:13.158 06:19:44 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1976849 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@952 -- # '[' -z 1976849 ']' 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@956 -- # kill -0 1976849 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@957 -- # uname 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1976849 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1976849' 00:09:13.158 killing process with pid 1976849 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@971 -- # kill 1976849 00:09:13.158 06:19:44 app_cmdline -- common/autotest_common.sh@976 -- # wait 1976849 00:09:13.724 00:09:13.724 real 0m1.636s 00:09:13.724 user 0m2.068s 00:09:13.724 sys 0m0.447s 00:09:13.724 06:19:45 app_cmdline -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.724 06:19:45 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:09:13.724 ************************************ 00:09:13.724 END TEST app_cmdline 00:09:13.724 ************************************ 00:09:13.724 06:19:45 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:13.724 06:19:45 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:09:13.724 06:19:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.724 06:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:13.724 ************************************ 00:09:13.724 START TEST version 00:09:13.724 ************************************ 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:09:13.724 * Looking for test storage... 
00:09:13.724 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.724 06:19:45 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.724 06:19:45 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.724 06:19:45 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.724 06:19:45 version -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.724 06:19:45 version -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.724 06:19:45 version -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.724 06:19:45 version -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.724 06:19:45 version -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.724 06:19:45 version -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.724 06:19:45 version -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.724 06:19:45 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.724 06:19:45 version -- scripts/common.sh@344 -- # case "$op" in 00:09:13.724 06:19:45 version -- scripts/common.sh@345 -- # : 1 00:09:13.724 06:19:45 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.724 06:19:45 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.724 06:19:45 version -- scripts/common.sh@365 -- # decimal 1 00:09:13.724 06:19:45 version -- scripts/common.sh@353 -- # local d=1 00:09:13.724 06:19:45 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.724 06:19:45 version -- scripts/common.sh@355 -- # echo 1 00:09:13.724 06:19:45 version -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.724 06:19:45 version -- scripts/common.sh@366 -- # decimal 2 00:09:13.724 06:19:45 version -- scripts/common.sh@353 -- # local d=2 00:09:13.724 06:19:45 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.724 06:19:45 version -- scripts/common.sh@355 -- # echo 2 00:09:13.724 06:19:45 version -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.724 06:19:45 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.724 06:19:45 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.724 06:19:45 version -- scripts/common.sh@368 -- # return 0 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.724 --rc genhtml_branch_coverage=1 00:09:13.724 --rc genhtml_function_coverage=1 00:09:13.724 --rc genhtml_legend=1 00:09:13.724 --rc geninfo_all_blocks=1 00:09:13.724 --rc geninfo_unexecuted_blocks=1 00:09:13.724 00:09:13.724 ' 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.724 --rc genhtml_branch_coverage=1 00:09:13.724 --rc genhtml_function_coverage=1 00:09:13.724 --rc genhtml_legend=1 00:09:13.724 --rc geninfo_all_blocks=1 00:09:13.724 --rc geninfo_unexecuted_blocks=1 00:09:13.724 00:09:13.724 ' 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:13.724 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.724 --rc genhtml_branch_coverage=1 00:09:13.724 --rc genhtml_function_coverage=1 00:09:13.724 --rc genhtml_legend=1 00:09:13.724 --rc geninfo_all_blocks=1 00:09:13.724 --rc geninfo_unexecuted_blocks=1 00:09:13.724 00:09:13.724 ' 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.724 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.724 --rc genhtml_branch_coverage=1 00:09:13.724 --rc genhtml_function_coverage=1 00:09:13.724 --rc genhtml_legend=1 00:09:13.724 --rc geninfo_all_blocks=1 00:09:13.724 --rc geninfo_unexecuted_blocks=1 00:09:13.724 00:09:13.724 ' 00:09:13.724 06:19:45 version -- app/version.sh@17 -- # get_header_version major 00:09:13.724 06:19:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # cut -f2 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.724 06:19:45 version -- app/version.sh@17 -- # major=25 00:09:13.724 06:19:45 version -- app/version.sh@18 -- # get_header_version minor 00:09:13.724 06:19:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # cut -f2 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.724 06:19:45 version -- app/version.sh@18 -- # minor=1 00:09:13.724 06:19:45 version -- app/version.sh@19 -- # get_header_version patch 00:09:13.724 06:19:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # cut -f2 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.724 06:19:45 version -- app/version.sh@19 -- # patch=0 00:09:13.724 06:19:45 version -- app/version.sh@20 -- # get_header_version suffix 00:09:13.724 06:19:45 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # cut -f2 00:09:13.724 06:19:45 version -- app/version.sh@14 -- # tr -d '"' 00:09:13.724 06:19:45 version -- app/version.sh@20 -- # suffix=-pre 00:09:13.724 06:19:45 version -- app/version.sh@22 -- # version=25.1 00:09:13.724 06:19:45 version -- app/version.sh@25 -- # (( patch != 0 )) 00:09:13.724 06:19:45 version -- app/version.sh@28 -- # version=25.1rc0 00:09:13.724 06:19:45 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:13.724 06:19:45 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:09:13.724 06:19:45 version -- app/version.sh@30 -- # py_version=25.1rc0 00:09:13.724 06:19:45 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:09:13.724 00:09:13.724 real 0m0.204s 00:09:13.724 user 0m0.133s 00:09:13.724 sys 0m0.097s 00:09:13.724 06:19:45 version -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:13.724 
06:19:45 version -- common/autotest_common.sh@10 -- # set +x 00:09:13.724 ************************************ 00:09:13.724 END TEST version 00:09:13.724 ************************************ 00:09:13.724 06:19:45 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:09:13.724 06:19:45 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:09:13.724 06:19:45 -- spdk/autotest.sh@194 -- # uname -s 00:09:13.724 06:19:45 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:09:13.724 06:19:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:13.724 06:19:45 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:09:13.983 06:19:45 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@252 -- # '[' 0 -eq 1 ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@256 -- # timing_exit lib 00:09:13.983 06:19:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:13.983 06:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:13.983 06:19:45 -- spdk/autotest.sh@258 -- # '[' 0 -eq 1 ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@263 -- # '[' 0 -eq 1 ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@272 -- # '[' 1 -eq 1 ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@273 -- # export NET_TYPE 00:09:13.983 06:19:45 -- spdk/autotest.sh@276 -- # '[' tcp = rdma ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@279 -- # '[' tcp = tcp ']' 00:09:13.983 06:19:45 -- spdk/autotest.sh@280 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:13.983 06:19:45 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:13.983 06:19:45 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.983 06:19:45 -- common/autotest_common.sh@10 -- # set +x 00:09:13.983 ************************************ 00:09:13.983 START TEST nvmf_tcp 00:09:13.983 ************************************ 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:09:13.983 * Looking for test storage... 
00:09:13.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:13.983 06:19:45 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:13.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.983 --rc genhtml_branch_coverage=1 00:09:13.983 --rc genhtml_function_coverage=1 00:09:13.983 --rc genhtml_legend=1 00:09:13.983 --rc geninfo_all_blocks=1 00:09:13.983 --rc geninfo_unexecuted_blocks=1 00:09:13.983 00:09:13.983 ' 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:13.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.983 --rc genhtml_branch_coverage=1 00:09:13.983 --rc genhtml_function_coverage=1 00:09:13.983 --rc genhtml_legend=1 00:09:13.983 --rc geninfo_all_blocks=1 00:09:13.983 --rc geninfo_unexecuted_blocks=1 00:09:13.983 00:09:13.983 ' 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1705 -- # export 
'LCOV=lcov 00:09:13.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.983 --rc genhtml_branch_coverage=1 00:09:13.983 --rc genhtml_function_coverage=1 00:09:13.983 --rc genhtml_legend=1 00:09:13.983 --rc geninfo_all_blocks=1 00:09:13.983 --rc geninfo_unexecuted_blocks=1 00:09:13.983 00:09:13.983 ' 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:13.983 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:13.983 --rc genhtml_branch_coverage=1 00:09:13.983 --rc genhtml_function_coverage=1 00:09:13.983 --rc genhtml_legend=1 00:09:13.983 --rc geninfo_all_blocks=1 00:09:13.983 --rc geninfo_unexecuted_blocks=1 00:09:13.983 00:09:13.983 ' 00:09:13.983 06:19:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:09:13.983 06:19:45 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:09:13.983 06:19:45 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:13.983 06:19:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:13.983 ************************************ 00:09:13.983 START TEST nvmf_target_core 00:09:13.983 ************************************ 00:09:13.983 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:09:13.983 * Looking for test storage... 00:09:13.983 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:13.983 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:13.983 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lcov --version 00:09:13.983 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.242 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.243 --rc genhtml_branch_coverage=1 00:09:14.243 --rc genhtml_function_coverage=1 00:09:14.243 --rc genhtml_legend=1 00:09:14.243 --rc geninfo_all_blocks=1 00:09:14.243 --rc geninfo_unexecuted_blocks=1 00:09:14.243 00:09:14.243 ' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.243 --rc genhtml_branch_coverage=1 00:09:14.243 --rc genhtml_function_coverage=1 00:09:14.243 --rc genhtml_legend=1 00:09:14.243 --rc geninfo_all_blocks=1 00:09:14.243 --rc geninfo_unexecuted_blocks=1 00:09:14.243 00:09:14.243 ' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.243 --rc genhtml_branch_coverage=1 00:09:14.243 --rc genhtml_function_coverage=1 00:09:14.243 --rc genhtml_legend=1 00:09:14.243 --rc geninfo_all_blocks=1 00:09:14.243 --rc geninfo_unexecuted_blocks=1 00:09:14.243 00:09:14.243 ' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.243 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.243 --rc genhtml_branch_coverage=1 00:09:14.243 --rc genhtml_function_coverage=1 00:09:14.243 --rc genhtml_legend=1 00:09:14.243 --rc geninfo_all_blocks=1 00:09:14.243 --rc geninfo_unexecuted_blocks=1 00:09:14.243 00:09:14.243 ' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.243 06:19:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.244 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:14.244 
************************************ 00:09:14.244 START TEST nvmf_abort 00:09:14.244 ************************************ 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:09:14.244 * Looking for test storage... 00:09:14.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:09:14.244 06:19:45 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:14.244 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:14.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.504 --rc genhtml_branch_coverage=1 00:09:14.504 --rc genhtml_function_coverage=1 00:09:14.504 --rc genhtml_legend=1 00:09:14.504 --rc geninfo_all_blocks=1 00:09:14.504 --rc geninfo_unexecuted_blocks=1 00:09:14.504 00:09:14.504 ' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:14.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.504 --rc genhtml_branch_coverage=1 00:09:14.504 --rc genhtml_function_coverage=1 00:09:14.504 --rc genhtml_legend=1 00:09:14.504 --rc geninfo_all_blocks=1 00:09:14.504 --rc geninfo_unexecuted_blocks=1 00:09:14.504 00:09:14.504 ' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:14.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.504 --rc genhtml_branch_coverage=1 00:09:14.504 --rc genhtml_function_coverage=1 00:09:14.504 --rc genhtml_legend=1 00:09:14.504 --rc geninfo_all_blocks=1 00:09:14.504 --rc geninfo_unexecuted_blocks=1 00:09:14.504 00:09:14.504 ' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:14.504 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:14.504 --rc genhtml_branch_coverage=1 00:09:14.504 --rc genhtml_function_coverage=1 00:09:14.504 --rc genhtml_legend=1 00:09:14.504 --rc geninfo_all_blocks=1 00:09:14.504 --rc geninfo_unexecuted_blocks=1 00:09:14.504 00:09:14.504 ' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:14.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 
00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:09:14.504 06:19:46 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.040 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.041 06:19:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:17.041 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:17.041 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.041 06:19:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:17.041 Found net devices under 0000:09:00.0: cvl_0_0 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:17.041 Found net devices under 0000:09:00.1: cvl_0_1 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.041 06:19:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.041 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.041 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:09:17.041 00:09:17.041 --- 10.0.0.2 ping statistics --- 00:09:17.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.041 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.041 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.041 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:09:17.041 00:09:17.041 --- 10.0.0.1 ping statistics --- 00:09:17.041 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.041 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=1978942 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 1978942 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 1978942 ']' 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.041 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.042 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 [2024-11-20 06:19:48.512906] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
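The nvmftestinit trace above boils down to a small amount of namespace plumbing: the second E810 port (cvl_0_1) stays in the default namespace as the initiator side, the first port (cvl_0_0) is moved into a dedicated namespace that plays the target, and nvmf_tgt is then launched inside that namespace. A condensed sketch of those steps follows; the interface names and addresses are the ones this particular run detected, not constants, and the target launch line is copied verbatim from the trace.

    # Interface names and IPs below are the ones this run detected, not constants.
    ip netns add cvl_0_0_ns_spdk                               # target-side network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # move the first E810 port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator address, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # The ipts helper tags this rule with an SPDK_NVMF comment so the later
    # iptables-save | grep -v SPDK_NVMF | iptables-restore pass can strip it again.
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # reachability check, both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Start the target inside the namespace; waitforlisten then blocks until the
    # RPC server answers on /var/tmp/spdk.sock (the "Waiting for process..." message above).
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &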
00:09:17.042 [2024-11-20 06:19:48.512993] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.042 [2024-11-20 06:19:48.586278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:17.042 [2024-11-20 06:19:48.647134] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.042 [2024-11-20 06:19:48.647184] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:17.042 [2024-11-20 06:19:48.647198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.042 [2024-11-20 06:19:48.647208] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.042 [2024-11-20 06:19:48.647218] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.042 [2024-11-20 06:19:48.648732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:17.042 [2024-11-20 06:19:48.648785] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:17.042 [2024-11-20 06:19:48.648789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 [2024-11-20 06:19:48.801985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 Malloc0 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 Delay0 
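With the target up, the trace configures the whole block stack over JSON-RPC: a TCP transport, a 64 MB malloc bdev with a 4096-byte block size, and a delay bdev layered on top of it. rpc_cmd is the test framework's wrapper around scripts/rpc.py, so a sketch of the equivalent direct calls looks like the following; the flags are reproduced verbatim from the trace, and the RPC shell variable is just shorthand for this sketch.

    # /var/tmp/spdk.sock is a filesystem Unix socket, so it is reachable from the
    # default namespace even though the target's NIC lives in cvl_0_0_ns_spdk.
    RPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $RPC nvmf_create_transport -t tcp -o -u 8192 -a 256        # "*** TCP Transport Init ***"
    $RPC bdev_malloc_create 64 4096 -b Malloc0                 # 64 MB bdev, 4096-byte blocks
    # Delay bdev over Malloc0: the four latency arguments (average/p99 read and write,
    # in microseconds) add on the order of a second per I/O, so the abort example
    # always has a deep backlog of outstanding commands to cancel.
    $RPC bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000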
00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.042 [2024-11-20 06:19:48.868194] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:17.042 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:17.301 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:17.301 06:19:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:09:17.301 [2024-11-20 06:19:48.983077] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:19.201 Initializing NVMe Controllers 00:09:19.201 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:19.201 controller IO queue size 128 less than required 00:09:19.201 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:09:19.201 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:09:19.201 Initialization complete. Launching workers. 
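Continuing the same sketch, the remaining setup and the workload itself also appear verbatim in the trace: subsystem cnode0 exposes Delay0 as namespace 1 behind a data listener and a discovery listener on 10.0.0.2:4420, and the abort example is then run from the initiator side.

    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0      # -a: allow any host
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0           # becomes NSID 1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # Initiator side, default namespace: one core, one second run, queue depth 128.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128

In the summary that follows, the large "failed" I/O count lines up with the aborts: most submitted commands are cancelled before the delay bdev lets them complete, and the abort counters (26423 successful of 26480 submitted here) track that same population.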
00:09:19.201 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 26419 00:09:19.201 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 26480, failed to submit 62 00:09:19.201 success 26423, unsuccessful 57, failed 0 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:19.201 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:19.459 rmmod nvme_tcp 00:09:19.459 rmmod nvme_fabrics 00:09:19.459 rmmod nvme_keyring 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 1978942 ']' 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 1978942 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 1978942 ']' 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 1978942 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1978942 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1978942' 00:09:19.459 killing process with pid 1978942 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@971 -- # kill 1978942 00:09:19.459 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@976 -- # wait 1978942 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:19.718 06:19:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.718 06:19:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:21.624 00:09:21.624 real 0m7.487s 00:09:21.624 user 0m10.680s 00:09:21.624 sys 0m2.587s 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:09:21.624 ************************************ 00:09:21.624 END TEST nvmf_abort 00:09:21.624 ************************************ 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:09:21.624 06:19:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:21.882 ************************************ 00:09:21.882 START TEST nvmf_ns_hotplug_stress 00:09:21.882 ************************************ 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:09:21.882 * Looking for test storage... 
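Each test in this log runs through the framework's run_test helper, which is what produces the asterisk START TEST / END TEST banners and the real/user/sys timing block printed above for nvmf_abort (0m7.487s wall clock). A minimal sketch of that shape, assuming only what the banners and the bash time builtin's output imply; run_test_sketch is a hypothetical stand-in, not the actual autotest_common.sh implementation.

    run_test_sketch() {
        local name=$1; shift                      # e.g. nvmf_ns_hotplug_stress
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"                                 # e.g. test/nvmf/target/ns_hotplug_stress.sh --transport=tcp
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }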
00:09:21.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:09:21.882 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:09:21.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.883 --rc genhtml_branch_coverage=1 00:09:21.883 --rc genhtml_function_coverage=1 00:09:21.883 --rc genhtml_legend=1 00:09:21.883 --rc geninfo_all_blocks=1 00:09:21.883 --rc geninfo_unexecuted_blocks=1 00:09:21.883 00:09:21.883 ' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:09:21.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.883 --rc genhtml_branch_coverage=1 00:09:21.883 --rc genhtml_function_coverage=1 00:09:21.883 --rc genhtml_legend=1 00:09:21.883 --rc geninfo_all_blocks=1 00:09:21.883 --rc geninfo_unexecuted_blocks=1 00:09:21.883 00:09:21.883 ' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:09:21.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.883 --rc genhtml_branch_coverage=1 00:09:21.883 --rc genhtml_function_coverage=1 00:09:21.883 --rc genhtml_legend=1 00:09:21.883 --rc geninfo_all_blocks=1 00:09:21.883 --rc geninfo_unexecuted_blocks=1 00:09:21.883 00:09:21.883 ' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:09:21.883 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:21.883 --rc genhtml_branch_coverage=1 00:09:21.883 --rc genhtml_function_coverage=1 00:09:21.883 --rc genhtml_legend=1 00:09:21.883 --rc geninfo_all_blocks=1 00:09:21.883 --rc geninfo_unexecuted_blocks=1 00:09:21.883 00:09:21.883 ' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.883 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:21.884 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:09:21.884 06:19:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # 
local -ga e810 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:24.416 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:09:24.417 Found 0000:09:00.0 (0x8086 - 0x159b) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.417 
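The trace above shows nvmf/common.sh building its tables of supported NIC device IDs (Intel E810/X722, Mellanox) and then resolving each matching PCI address to its kernel net device through sysfs. A minimal sketch of that lookup, assuming only the E810 IDs (0x1592/0x159b) matched on this test bed; the cvl_0_0 / cvl_0_1 names it would print come from the ice driver, as seen further down in the log.

    #!/usr/bin/env bash
    # Minimal sketch of the PCI -> net-device resolution traced above.
    # Only the two E810 device IDs used in this run are checked; the x722/mlx
    # tables from nvmf/common.sh would be handled the same way.
    intel=0x8086
    e810=(0x1592 0x159b)
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        vendor=$(<"$pci/vendor") device=$(<"$pci/device")
        [[ $vendor == "$intel" ]] || continue
        [[ " ${e810[*]} " == *" $device "* ]] || continue
        # every entry under <pci>/net/ is a kernel netdev bound to that PCI function
        for dev in "$pci/net/"*; do
            [[ -e $dev ]] && net_devs+=("${dev##*/}")
        done
    done
    printf 'Found net device: %s\n' "${net_devs[@]}"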
06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:09:24.417 Found 0000:09:00.1 (0x8086 - 0x159b) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:09:24.417 Found net devices under 0000:09:00.0: cvl_0_0 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:09:24.417 Found net devices under 0000:09:00.1: cvl_0_1 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:24.417 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.417 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:09:24.417 00:09:24.417 --- 10.0.0.2 ping statistics --- 00:09:24.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.417 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.417 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.417 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:09:24.417 00:09:24.417 --- 10.0.0.1 ping statistics --- 00:09:24.417 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.417 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=1981307 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 1981307 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 1981307 ']' 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.417 06:19:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:09:24.417 06:19:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.417 [2024-11-20 06:19:56.027554] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:09:24.418 [2024-11-20 06:19:56.027637] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.418 [2024-11-20 06:19:56.099130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:24.418 [2024-11-20 06:19:56.158971] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.418 [2024-11-20 06:19:56.159024] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.418 [2024-11-20 06:19:56.159037] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.418 [2024-11-20 06:19:56.159048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.418 [2024-11-20 06:19:56.159058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
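Above, the harness launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and blocks until the target's RPC socket answers ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A minimal sketch of that wait step, using rpc_get_methods as the probe call and the max_retries=100 limit visible in the trace; the real waitforlisten helper in autotest_common.sh may differ in detail.

    #!/usr/bin/env bash
    # Sketch of the waitforlisten step: poll /var/tmp/spdk.sock until the freshly
    # started nvmf_tgt answers an RPC, then continue with the test.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/spdk.sock
    max_retries=100
    for ((i = 0; i < max_retries; i++)); do
        if "$rpc_py" -s "$sock" rpc_get_methods &> /dev/null; then
            echo "nvmf_tgt is up and listening on $sock"
            break
        fi
        sleep 0.5
    done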
00:09:24.418 [2024-11-20 06:19:56.160707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:24.418 [2024-11-20 06:19:56.160740] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.418 [2024-11-20 06:19:56.160743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:09:24.676 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:24.933 [2024-11-20 06:19:56.560504] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.933 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:25.191 06:19:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.449 [2024-11-20 06:19:57.115463] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.449 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:25.707 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:09:25.965 Malloc0 00:09:25.965 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:26.223 Delay0 00:09:26.223 06:19:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:26.480 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:09:26.738 NULL1 00:09:26.738 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:09:26.995 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1981607 00:09:26.995 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:09:26.995 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:26.995 06:19:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:27.253 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:27.513 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:09:27.513 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:09:27.771 true 00:09:27.771 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:27.771 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:28.028 06:19:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:28.286 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:09:28.286 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:09:28.543 true 00:09:28.543 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:28.543 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:29.109 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:29.109 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:09:29.109 06:20:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:09:29.367 true 00:09:29.367 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:29.367 06:20:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:30.299 Read completed with error (sct=0, sc=11) 00:09:30.299 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:30.299 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:30.557 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:09:30.557 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:09:30.815 true 00:09:31.072 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:31.072 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:31.330 06:20:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:31.587 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:09:31.587 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:09:31.587 true 00:09:31.845 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:31.845 06:20:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:32.779 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:32.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.779 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:32.779 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:09:32.779 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:09:33.036 true 00:09:33.036 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:33.037 06:20:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:33.599 06:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Delay0 00:09:33.599 06:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:09:33.599 06:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:09:33.857 true 00:09:33.857 06:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:33.857 06:20:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:34.840 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:34.840 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:35.097 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:09:35.097 06:20:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:09:35.355 true 00:09:35.355 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:35.355 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:35.613 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:35.870 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:09:35.870 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:09:36.128 true 00:09:36.128 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:36.128 06:20:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.060 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:37.060 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:37.318 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:09:37.318 06:20:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:09:37.575 true 00:09:37.575 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:37.575 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:37.833 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.091 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:09:38.091 06:20:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:09:38.348 true 00:09:38.348 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:38.348 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:38.606 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:38.863 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:09:38.863 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:09:39.121 true 00:09:39.121 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:39.121 06:20:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.053 06:20:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:40.053 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:40.311 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:09:40.311 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:09:40.568 true 00:09:40.568 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:40.568 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:40.826 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:41.084 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:09:41.084 06:20:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:09:41.341 true 00:09:41.341 06:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:41.342 06:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:42.275 06:20:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.275 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:42.534 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:09:42.534 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:09:42.792 true 00:09:42.792 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:42.792 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.049 06:20:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:43.307 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:09:43.307 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:09:43.589 true 00:09:43.589 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:43.589 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:43.846 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:44.104 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:09:44.104 06:20:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:09:44.361 true 00:09:44.361 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:44.361 06:20:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:45.735 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:45.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.735 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:45.735 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:09:45.735 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:09:45.992 true 00:09:45.992 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:45.992 06:20:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:46.250 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:46.508 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:09:46.508 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:09:46.765 true 00:09:46.765 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:46.765 06:20:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:47.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.698 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:47.698 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:09:47.955 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:09:47.955 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:09:48.213 true 00:09:48.213 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:48.213 06:20:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:48.470 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:48.727 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:09:48.728 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1021 00:09:48.985 true 00:09:48.985 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:48.985 06:20:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:49.918 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.175 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:09:50.175 06:20:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:09:50.433 true 00:09:50.433 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:50.433 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:50.690 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:50.948 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:09:50.948 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:09:51.212 true 00:09:51.212 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:51.212 06:20:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:51.472 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:51.730 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:09:51.730 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:09:51.988 true 00:09:51.988 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:51.988 06:20:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:52.919 06:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.177 06:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 
00:09:53.177 06:20:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:09:53.435 true 00:09:53.435 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:53.435 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:53.693 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:53.950 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:09:53.950 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:09:54.207 true 00:09:54.207 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:54.207 06:20:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:54.465 06:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:54.723 06:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:09:54.723 06:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:09:54.980 true 00:09:54.980 06:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:54.980 06:20:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:55.913 06:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:56.170 06:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:09:56.170 06:20:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:09:56.427 true 00:09:56.427 06:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:56.427 06:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:56.735 06:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.050 06:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:09:57.050 06:20:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:09:57.308 true 00:09:57.308 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:57.308 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:57.308 Initializing NVMe Controllers 00:09:57.308 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:57.308 Controller IO queue size 128, less than required. 00:09:57.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:57.308 Controller IO queue size 128, less than required. 00:09:57.308 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:57.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:57.308 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:09:57.308 Initialization complete. Launching workers. 00:09:57.308 ======================================================== 00:09:57.308 Latency(us) 00:09:57.308 Device Information : IOPS MiB/s Average min max 00:09:57.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 498.26 0.24 104666.31 2168.23 1023606.40 00:09:57.308 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8355.87 4.08 15272.02 2421.68 451218.45 00:09:57.308 ======================================================== 00:09:57.308 Total : 8854.13 4.32 20302.58 2168.23 1023606.40 00:09:57.308 00:09:57.565 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:09:57.822 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:09:57.822 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:09:58.080 true 00:09:58.080 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1981607 00:09:58.080 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1981607) - No such process 00:09:58.080 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1981607 00:09:58.080 06:20:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:58.337 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:09:58.594 06:20:30 
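The iterations traced above all follow the same cycle from ns_hotplug_stress.sh: while the spdk_nvme_perf workload (PERF_PID) is still alive, namespace 1 is hot-removed, re-added on Delay0, and NULL1 is resized one unit larger. A minimal sketch of that loop under those assumptions; kill -0 failing (the "No such process" line above) is what ends it.

    #!/usr/bin/env bash
    # Sketch of the hot-plug stress loop traced above (script lines 44-50 in the
    # xtrace prefixes). PERF_PID and rpc_py are assumed to be set as in the log.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    null_size=1000
    while kill -0 "$PERF_PID"; do                      # exits once spdk_nvme_perf finishes
        "$rpc_py" nvmf_subsystem_remove_ns "$nqn" 1    # hot-remove namespace 1
        "$rpc_py" nvmf_subsystem_add_ns "$nqn" Delay0  # hot-add it back, backed by Delay0
        ((++null_size))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"  # grow NULL1 each pass
    done
    wait "$PERF_PID"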
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:09:58.594 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:09:58.594 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:09:58.594 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:58.594 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:09:58.850 null0 00:09:58.850 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:58.850 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:58.850 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:09:59.107 null1 00:09:59.107 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.107 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.107 06:20:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:09:59.364 null2 00:09:59.364 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.364 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.364 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:09:59.621 null3 00:09:59.879 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:09:59.879 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:09:59.879 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:09:59.879 null4 00:10:00.136 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.136 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.136 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:10:00.136 null5 00:10:00.395 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.395 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.395 06:20:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:10:00.652 null6 
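Note on the phase that concludes above: the single-namespace stress loop traced at ns_hotplug_stress.sh markers @44-@50 keeps hot-removing and re-adding namespace 1 while resizing the NULL1 bdev one unit larger per pass (1025, 1026, ...), until the background perf job (PID 1981607 in this run) exits. The following is a minimal bash sketch reconstructed from the trace, not the verbatim script; rpc, nqn and perf_pid are shorthand introduced only for the sketch.

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    perf_pid=1981607    # PID of the background I/O generator; wait below assumes it is a child of this shell
    null_size=1024
    while kill -0 "$perf_pid"; do                      # @44: loop until the perf job is gone
        "$rpc" nvmf_subsystem_remove_ns "$nqn" 1       # @45: hot-remove namespace 1
        "$rpc" nvmf_subsystem_add_ns "$nqn" Delay0     # @46: hot-add it back, backed by the Delay0 bdev
        null_size=$((null_size + 1))                   # @49
        "$rpc" bdev_null_resize NULL1 "$null_size"     # @50: resize NULL1 while I/O is still running
    done
    wait "$perf_pid"                                   # @53: reap the perf job
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 1           # @54-@55: final cleanup of both namespaces
    "$rpc" nvmf_subsystem_remove_ns "$nqn" 2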
00:10:00.652 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.652 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.652 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:10:00.910 null7 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
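The eight bdev_null_create calls above (markers @58-@60) give each worker thread its own null bdev, 100 MB with a 4096-byte block size, ending with null7. A sketch of that setup, reconstructed from the trace rather than quoted from the script, with rpc as before:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nthreads=8                                     # @58
    pids=()                                        # @58: PIDs of the background workers, filled in below
    for ((i = 0; i < nthreads; i++)); do           # @59
        "$rpc" bdev_null_create "null$i" 100 4096  # @60: name, size (MB), block size
    done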
00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.910 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
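Each add_remove worker visible in the interleaved @14-@17 markers does the same thing: attach its null bdev to the subsystem under a fixed namespace ID, detach it again, and repeat ten times. A sketch of that worker, inferred from the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1
    add_remove() {
        local nsid=$1 bdev=$2                                       # @14
        for ((i = 0; i < 10; i++)); do                              # @16
            "$rpc" nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"  # @17: hot-add the namespace
            "$rpc" nvmf_subsystem_remove_ns "$nqn" "$nsid"          # @18: hot-remove it again
        done
    }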
00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
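The workers are started in the background and their PIDs collected, which is what the @62-@64 markers around each launch show; the wait at @66 listing the eight worker PIDs then blocks until they all finish. Continuing the sketch above:

    for ((i = 0; i < nthreads; i++)); do   # @62
        add_remove $((i + 1)) "null$i" &   # @63: namespace ID i+1 backed by bdev null$i
        pids+=($!)                         # @64: remember the worker's PID
    done
    wait "${pids[@]}"                      # @66: join all eight workers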
00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1985809 1985810 1985812 1985814 1985816 1985818 1985820 1985822 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:00.911 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.169 06:20:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.427 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:01.685 06:20:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:01.685 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:01.943 06:20:33 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:01.943 06:20:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:02.509 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:02.509 06:20:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 
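From here on the output is the same pattern repeating: each of the eight workers keeps removing and re-adding its namespace until it has completed its ten iterations. When rerunning this by hand, one way to see which namespaces are attached at a given moment (not part of this run's trace, with rpc as in the sketches above; field names may differ slightly between SPDK releases) is to query the target:

    "$rpc" nvmf_get_subsystems | python3 -c '
    import json, sys
    for subsys in json.load(sys.stdin):
        for ns in subsys.get("namespaces", []):
            print(subsys["nqn"], ns["nsid"], ns["bdev_name"])'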
00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:02.767 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.026 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.284 06:20:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.284 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.285 06:20:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.542 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:03.542 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:03.542 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:03.542 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:03.543 06:20:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:03.543 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:03.543 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:03.543 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:03.801 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.059 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.059 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.059 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.318 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.318 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.318 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.318 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.318 06:20:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:04.577 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:04.834 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:04.834 06:20:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:04.834 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:04.835 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:04.835 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:04.835 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:04.835 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:04.835 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:05.093 06:20:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.093 06:20:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:05.351 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:05.351 06:20:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 
00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:05.609 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:05.867 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:05.867 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.125 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.125 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.125 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.125 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.125 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.125 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.383 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.383 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:10:06.641 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:10:06.641 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:10:06.641 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:10:06.641 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:10:06.642 06:20:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:10:06.642 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:06.642 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:10:06.642 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:10:06.900 06:20:38 
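Reconstructed from the xtrace above, the hot-plug stress cycle (ns_hotplug_stress.sh lines 16-18) amounts to the minimal sketch below. The rpc and nqn variables exist only for this sketch, and the interleaved ordering in the trace suggests the add calls are actually issued concurrently, which this sequential form does not reproduce.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  nqn=nqn.2016-06.io.spdk:cnode1
  for ((i = 0; i < 10; ++i)); do
      # attach null0..null7 to the subsystem as namespace IDs 1..8 ...
      for n in 1 2 3 4 5 6 7 8; do
          $rpc nvmf_subsystem_add_ns -n "$n" "$nqn" "null$((n - 1))"
      done
      # ... then hot-remove every namespace again
      for n in 1 2 3 4 5 6 7 8; do
          $rpc nvmf_subsystem_remove_ns "$nqn" "$n"
      done
  done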
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:06.900 rmmod nvme_tcp 00:10:06.900 rmmod nvme_fabrics 00:10:06.900 rmmod nvme_keyring 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 1981307 ']' 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 1981307 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 1981307 ']' 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 1981307 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:06.900 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1981307 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1981307' 00:10:07.159 killing process with pid 1981307 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 1981307 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 1981307 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 
15> /dev/null' 00:10:07.159 06:20:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.699 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:09.699 00:10:09.700 real 0m47.549s 00:10:09.700 user 3m41.065s 00:10:09.700 sys 0m16.308s 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:10:09.700 ************************************ 00:10:09.700 END TEST nvmf_ns_hotplug_stress 00:10:09.700 ************************************ 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:09.700 ************************************ 00:10:09.700 START TEST nvmf_delete_subsystem 00:10:09.700 ************************************ 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:10:09.700 * Looking for test storage... 00:10:09.700 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:09.700 06:20:41 
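For reference, the nvmftestfini teardown recorded at the end of the hot-plug test above boils down to roughly the sequence below. killprocess and _remove_spdk_ns are harness helpers whose bodies are not visible in this trace, so the kill and netns-delete lines are assumptions.

  sync
  modprobe -v -r nvme-tcp        # retried in a {1..20} loop in the helper
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # assumed effect of killprocess; pid 1981307 in this run
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the SPDK test rules
  ip netns delete cvl_0_0_ns_spdk                         # assumed effect of _remove_spdk_ns
  ip -4 addr flush cvl_0_1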
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:09.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.700 --rc genhtml_branch_coverage=1 00:10:09.700 --rc genhtml_function_coverage=1 00:10:09.700 --rc genhtml_legend=1 00:10:09.700 --rc geninfo_all_blocks=1 00:10:09.700 --rc geninfo_unexecuted_blocks=1 00:10:09.700 00:10:09.700 ' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:09.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.700 --rc genhtml_branch_coverage=1 00:10:09.700 --rc genhtml_function_coverage=1 00:10:09.700 --rc genhtml_legend=1 00:10:09.700 --rc geninfo_all_blocks=1 00:10:09.700 --rc geninfo_unexecuted_blocks=1 00:10:09.700 00:10:09.700 ' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:09.700 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.700 --rc genhtml_branch_coverage=1 00:10:09.700 --rc genhtml_function_coverage=1 00:10:09.700 --rc genhtml_legend=1 00:10:09.700 --rc geninfo_all_blocks=1 00:10:09.700 --rc geninfo_unexecuted_blocks=1 00:10:09.700 00:10:09.700 ' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:09.700 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:09.700 --rc genhtml_branch_coverage=1 00:10:09.700 --rc genhtml_function_coverage=1 00:10:09.700 --rc genhtml_legend=1 00:10:09.700 --rc geninfo_all_blocks=1 00:10:09.700 --rc geninfo_unexecuted_blocks=1 00:10:09.700 00:10:09.700 ' 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:09.700 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:09.701 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:10:09.701 06:20:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # 
local -ga x722 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:12.239 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.239 
06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:12.239 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:12.239 Found net devices under 0000:09:00.0: cvl_0_0 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:12.239 Found net devices under 0000:09:00.1: cvl_0_1 
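The discovery step above filters the host's PCI devices against the supported ID lists (the e810 entries 0x8086:0x1592/0x159b match here) and resolves each matching function to its kernel netdev through sysfs. A hypothetical standalone version of that resolution, with the two PCI addresses from this host hard-coded:

  for pci in 0000:09:00.0 0000:09:00.1; do        # the two e810 ports found above
      for net in /sys/bus/pci/devices/"$pci"/net/*; do
          [ -e "$net" ] && echo "Found net devices under $pci: ${net##*/}"
      done
  done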
00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:12.239 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:12.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:12.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.339 ms 00:10:12.240 00:10:12.240 --- 10.0.0.2 ping statistics --- 00:10:12.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.240 rtt min/avg/max/mdev = 0.339/0.339/0.339/0.000 ms 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:12.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:12.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:10:12.240 00:10:12.240 --- 10.0.0.1 ping statistics --- 00:10:12.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:12.240 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=1988618 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 1988618 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 1988618 ']' 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:12.240 06:20:43 
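Condensed from the nvmf_tcp_init / nvmfappstart trace above: the target-side port cvl_0_0 is moved into its own network namespace, both ends get addresses on 10.0.0.0/24, an iptables rule opens TCP/4420 on the initiator interface, connectivity is ping-checked in both directions, and the target application is then launched inside the namespace. This is a condensed replay of the traced commands; backgrounding and the waitforlisten helper belong to the harness and are not reproduced.

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # target NIC into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side stays in the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                        # host -> namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # namespace -> host
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 &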
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:12.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.240 [2024-11-20 06:20:43.673436] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:10:12.240 [2024-11-20 06:20:43.673515] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:12.240 [2024-11-20 06:20:43.746927] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:10:12.240 [2024-11-20 06:20:43.803415] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:12.240 [2024-11-20 06:20:43.803464] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:12.240 [2024-11-20 06:20:43.803488] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:12.240 [2024-11-20 06:20:43.803500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:12.240 [2024-11-20 06:20:43.803509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:12.240 [2024-11-20 06:20:43.806323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:12.240 [2024-11-20 06:20:43.806334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.240 [2024-11-20 06:20:43.957082] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:12.240 06:20:43 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.240 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.240 [2024-11-20 06:20:43.973321] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.241 NULL1 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.241 Delay0 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:12.241 06:20:43 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.241 06:20:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:12.241 06:20:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1988742 00:10:12.241 06:20:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:10:12.241 06:20:44 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:12.241 [2024-11-20 06:20:44.068144] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
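Flattening the rpc_cmd calls above into plain rpc.py invocations gives the delete_subsystem fixture: a TCP transport, one subsystem with a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev so I/O stays outstanding long enough to matter, and a 5-second spdk_nvme_perf run against it. rpc_cmd is a harness wrapper around rpc.py, so treating it as a direct call (and capturing the perf pid with $!) is an assumption of this sketch.

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc bdev_null_create NULL1 1000 512                      # 1000 MB backing bdev, 512-byte blocks
  $rpc bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!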
00:10:14.191 06:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.191 06:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:14.191 06:20:46 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 [2024-11-20 06:20:46.149015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2056860 is same with the state(6) to be set 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed 
with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 starting I/O failed: -6 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Write completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.449 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 starting I/O failed: -6 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 [2024-11-20 06:20:46.149800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb930000c40 is same with the state(6) to be set 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with 
error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 
00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 [2024-11-20 06:20:46.150243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20564a0 is same with the state(6) to be set 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Write completed with error (sct=0, sc=8) 00:10:14.450 Read completed with error (sct=0, sc=8) 00:10:15.384 [2024-11-20 06:20:47.121986] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20579a0 is same with the state(6) to be set 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 [2024-11-20 06:20:47.151086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb93000d7e0 is same with the state(6) to be set 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 
00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 [2024-11-20 06:20:47.152795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fb93000d350 is same with the state(6) to be set 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Read completed with error (sct=0, sc=8) 00:10:15.384 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 [2024-11-20 06:20:47.153191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20562c0 is same with the state(6) to be set 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 
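The flood of failed completions in this stretch of the log is the point of the test case: spdk_nvme_perf still has I/O queued against nqn.2016-06.io.spdk:cnode1 when the nvmf_delete_subsystem RPC tears the subsystem down, so in-flight commands come back with an error (sct=0, sc=8 is consistent with the generic NVMe "command aborted due to SQ deletion" status, if sc is the raw status code) and new submissions start failing (-6). A rough sketch of the flow being exercised, reconstructed from the commands visible in this trace (rpc_cmd is the test harness's wrapper around scripts/rpc.py; the perf options are the ones traced for the second run further down; pids and paths differ per run):

  # kick off a short randrw load against the target in the background
  spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!

  # delete the subsystem while that I/O is still in flight
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # poll until the perf process exits on its own (kill -0 only probes for existence);
  # the "No such process" message below is this probe firing after perf has already gone
  delay=0
  while kill -0 "$perf_pid"; do
      (( delay++ > 30 )) && break
      sleep 0.5
  done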
00:10:15.385 Write completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 Read completed with error (sct=0, sc=8) 00:10:15.385 [2024-11-20 06:20:47.153445] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2056680 is same with the state(6) to be set 00:10:15.385 Initializing NVMe Controllers 00:10:15.385 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:15.385 Controller IO queue size 128, less than required. 00:10:15.385 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:15.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:15.385 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:15.385 Initialization complete. Launching workers. 00:10:15.385 ======================================================== 00:10:15.385 Latency(us) 00:10:15.385 Device Information : IOPS MiB/s Average min max 00:10:15.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.41 0.08 993358.51 1261.33 2005041.16 00:10:15.385 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 166.86 0.08 916163.36 496.18 2003397.46 00:10:15.385 ======================================================== 00:10:15.385 Total : 325.27 0.16 953759.16 496.18 2005041.16 00:10:15.385 00:10:15.385 [2024-11-20 06:20:47.154182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20579a0 (9): Bad file descriptor 00:10:15.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:10:15.385 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.385 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:10:15.385 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1988742 00:10:15.385 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1988742 00:10:15.950 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1988742) - No such process 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1988742 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 1988742 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 1988742 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:15.950 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 [2024-11-20 06:20:47.677572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1989150 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:15.951 06:20:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:10:15.951 [2024-11-20 06:20:47.749512] 
subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:10:16.516 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:16.516 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:16.516 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.081 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.081 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:17.081 06:20:48 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.646 06:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.646 06:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:17.646 06:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:17.903 06:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:17.903 06:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:17.903 06:20:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:18.467 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:18.467 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:18.467 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.032 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.032 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:19.032 06:20:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:10:19.290 Initializing NVMe Controllers 00:10:19.290 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:19.290 Controller IO queue size 128, less than required. 00:10:19.290 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:10:19.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:10:19.290 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:10:19.290 Initialization complete. Launching workers. 
00:10:19.290 ======================================================== 00:10:19.290 Latency(us) 00:10:19.290 Device Information : IOPS MiB/s Average min max 00:10:19.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004456.05 1000193.68 1010918.01 00:10:19.290 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004390.01 1000197.08 1013321.41 00:10:19.290 ======================================================== 00:10:19.290 Total : 256.00 0.12 1004423.03 1000193.68 1013321.41 00:10:19.290 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1989150 00:10:19.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1989150) - No such process 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1989150 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:19.548 rmmod nvme_tcp 00:10:19.548 rmmod nvme_fabrics 00:10:19.548 rmmod nvme_keyring 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 1988618 ']' 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 1988618 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 1988618 ']' 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 1988618 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1988618 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' 
reactor_0 = sudo ']' 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1988618' 00:10:19.548 killing process with pid 1988618 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 1988618 00:10:19.548 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 1988618 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.806 06:20:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:22.347 00:10:22.347 real 0m12.496s 00:10:22.347 user 0m27.873s 00:10:22.347 sys 0m3.080s 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:10:22.347 ************************************ 00:10:22.347 END TEST nvmf_delete_subsystem 00:10:22.347 ************************************ 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:22.347 ************************************ 00:10:22.347 START TEST nvmf_host_management 00:10:22.347 ************************************ 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:10:22.347 * Looking for test storage... 
00:10:22.347 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:10:22.347 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.348 --rc genhtml_branch_coverage=1 00:10:22.348 --rc genhtml_function_coverage=1 00:10:22.348 --rc genhtml_legend=1 00:10:22.348 --rc geninfo_all_blocks=1 00:10:22.348 --rc geninfo_unexecuted_blocks=1 00:10:22.348 00:10:22.348 ' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.348 --rc genhtml_branch_coverage=1 00:10:22.348 --rc genhtml_function_coverage=1 00:10:22.348 --rc genhtml_legend=1 00:10:22.348 --rc geninfo_all_blocks=1 00:10:22.348 --rc geninfo_unexecuted_blocks=1 00:10:22.348 00:10:22.348 ' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.348 --rc genhtml_branch_coverage=1 00:10:22.348 --rc genhtml_function_coverage=1 00:10:22.348 --rc genhtml_legend=1 00:10:22.348 --rc geninfo_all_blocks=1 00:10:22.348 --rc geninfo_unexecuted_blocks=1 00:10:22.348 00:10:22.348 ' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:22.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:22.348 --rc genhtml_branch_coverage=1 00:10:22.348 --rc genhtml_function_coverage=1 00:10:22.348 --rc genhtml_legend=1 00:10:22.348 --rc geninfo_all_blocks=1 00:10:22.348 --rc geninfo_unexecuted_blocks=1 00:10:22.348 00:10:22.348 ' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:10:22.348 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:10:22.348 06:20:53 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local 
-ga e810 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:24.250 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:24.251 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:24.251 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:24.251 Found net devices under 0000:09:00.0: cvl_0_0 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:24.251 06:20:55 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:24.251 Found net devices under 0000:09:00.1: cvl_0_1 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.251 06:20:55 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:24.251 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:24.521 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.521 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.312 ms 00:10:24.521 00:10:24.521 --- 10.0.0.2 ping statistics --- 00:10:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.521 rtt min/avg/max/mdev = 0.312/0.312/0.312/0.000 ms 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.521 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.521 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:10:24.521 00:10:24.521 --- 10.0.0.1 ping statistics --- 00:10:24.521 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.521 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=1991620 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 1991620 00:10:24.521 06:20:56 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1991620 ']'
00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100
00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:24.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable
00:10:24.521 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:10:24.521 [2024-11-20 06:20:56.170401] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization...
00:10:24.521 [2024-11-20 06:20:56.170497] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:24.521 [2024-11-20 06:20:56.243502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:24.521 [2024-11-20 06:20:56.303740] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:24.521 [2024-11-20 06:20:56.303786] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:24.521 [2024-11-20 06:20:56.303810] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:24.521 [2024-11-20 06:20:56.303835] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:24.521 [2024-11-20 06:20:56.303844] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
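The nvmf_tcp_init sequence traced above splits the two cvl_0_* ports discovered under 0000:09:00 between namespaces: cvl_0_0 is moved into a private namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the default namespace as the initiator side (10.0.0.1), with an iptables rule opening the NVMe/TCP port and a ping in each direction as a sanity check. Condensed, the steps visible in the trace are (run as root; interface names and addresses are the ones the test detected):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in on the initiator side
ping -c 1 10.0.0.2                                             # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The nvmf_tgt started right after is wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is why every target-side command for the rest of the test carries that prefix.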
00:10:24.521 [2024-11-20 06:20:56.305455] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.521 [2024-11-20 06:20:56.305507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.521 [2024-11-20 06:20:56.305557] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:24.521 [2024-11-20 06:20:56.305561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 [2024-11-20 06:20:56.455251] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 Malloc0 00:10:24.852 [2024-11-20 06:20:56.531359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@73 -- # perfpid=1991672 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1991672 /var/tmp/bdevperf.sock 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 1991672 ']' 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:24.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:24.852 { 00:10:24.852 "params": { 00:10:24.852 "name": "Nvme$subsystem", 00:10:24.852 "trtype": "$TEST_TRANSPORT", 00:10:24.852 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:24.852 "adrfam": "ipv4", 00:10:24.852 "trsvcid": "$NVMF_PORT", 00:10:24.852 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:24.852 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:24.852 "hdgst": ${hdgst:-false}, 00:10:24.852 "ddgst": ${ddgst:-false} 00:10:24.852 }, 00:10:24.852 "method": "bdev_nvme_attach_controller" 00:10:24.852 } 00:10:24.852 EOF 00:10:24.852 )") 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:24.852 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:24.852 "params": { 00:10:24.852 "name": "Nvme0", 00:10:24.852 "trtype": "tcp", 00:10:24.852 "traddr": "10.0.0.2", 00:10:24.852 "adrfam": "ipv4", 00:10:24.852 "trsvcid": "4420", 00:10:24.852 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:24.852 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:24.852 "hdgst": false, 00:10:24.852 "ddgst": false 00:10:24.852 }, 00:10:24.852 "method": "bdev_nvme_attach_controller" 00:10:24.852 }' 00:10:24.852 [2024-11-20 06:20:56.616809] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
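The bdevperf invocation above takes its bdev configuration from --json /dev/fd/63, a process substitution fed by gen_nvmf_target_json: a heredoc in nvmf/common.sh builds one bdev_nvme_attach_controller stanza per subsystem and prints the JSON shown in the trace. A stripped-down sketch of the same pattern; the outer "subsystems"/"config" wrapper is an assumption about how the helper assembles the final document, while the values are the ones printed above:

gen_json() {
    cat <<'JSON'
{
  "subsystems": [ {
    "subsystem": "bdev",
    "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode0",
                  "hostnqn": "nqn.2016-06.io.spdk:host0",
                  "hdgst": false, "ddgst": false }
    } ]
  } ]
}
JSON
}
./build/examples/bdevperf --json <(gen_json) -q 64 -o 65536 -w verify -t 10

Feeding the config through a process-substitution descriptor avoids writing per-run NQNs and addresses to a temporary file; the exact fd number (63 here, 62 for the later run) is simply whatever bash allocates.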
00:10:24.852 [2024-11-20 06:20:56.616889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991672 ] 00:10:25.109 [2024-11-20 06:20:56.687363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:25.109 [2024-11-20 06:20:56.747637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.109 Running I/O for 10 seconds... 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.367 06:20:56 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.367 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.367 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:10:25.367 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:10:25.367 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:10:25.627 
06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=542 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 542 -ge 100 ']' 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.627 [2024-11-20 06:20:57.329929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330109] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330133] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is 
same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330262] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330273] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330285] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330363] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330394] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330430] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330478] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330526] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.330579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b45f10 is same with the state(6) to be set 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:25.627 06:20:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:10:25.627 [2024-11-20 06:20:57.345422] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.627 [2024-11-20 06:20:57.345463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.627 [2024-11-20 06:20:57.345483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.627 [2024-11-20 06:20:57.345497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.627 [2024-11-20 06:20:57.345512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.627 [2024-11-20 06:20:57.345527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.627 [2024-11-20 06:20:57.345541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.627 [2024-11-20 06:20:57.345554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.627 [2024-11-20 06:20:57.345568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1770a40 is same with the state(6) to be set 00:10:25.627 [2024-11-20 06:20:57.345683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.627 [2024-11-20 06:20:57.345706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.627 [2024-11-20 06:20:57.345731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.627 [2024-11-20 06:20:57.345747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.627 [2024-11-20 06:20:57.345765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.627 [2024-11-20 06:20:57.345795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.345825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.345862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.345892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.345921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.345951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.345980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.345995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:10:25.628 [2024-11-20 06:20:57.346682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.346943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 
[2024-11-20 06:20:57.346972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.346988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.628 [2024-11-20 06:20:57.347001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.628 [2024-11-20 06:20:57.347016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 
06:20:57.347269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 
06:20:57.347626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.347728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:89856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:10:25.629 [2024-11-20 06:20:57.347742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.629 [2024-11-20 06:20:57.348944] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:10:25.629 task offset: 81792 on job bdev=Nvme0n1 fails 00:10:25.629 00:10:25.629 Latency(us) 00:10:25.629 [2024-11-20T05:20:57.465Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:25.629 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:10:25.629 Job: Nvme0n1 ended in about 0.41 seconds with error 00:10:25.629 Verification LBA range: start 0x0 length 0x400 00:10:25.629 Nvme0n1 : 0.41 1556.16 97.26 155.86 0.00 36326.56 2706.39 34758.35 00:10:25.629 [2024-11-20T05:20:57.465Z] =================================================================================================================== 00:10:25.629 [2024-11-20T05:20:57.465Z] Total : 1556.16 97.26 155.86 0.00 36326.56 2706.39 34758.35 00:10:25.629 [2024-11-20 06:20:57.350837] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:10:25.629 [2024-11-20 06:20:57.350867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1770a40 (9): Bad file descriptor 00:10:25.629 [2024-11-20 06:20:57.401420] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
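The flood of ABORTED - SQ DELETION completions and the controller reset above are the point of the test rather than a defect: once the waitforio loop has confirmed that reads are flowing, the host's access to the subsystem is revoked mid-run and then restored. Reduced to its essentials the sequence is as follows (a sketch; rpc_cmd in the trace is a thin wrapper around scripts/rpc.py, and the 100-read threshold and the sleeps mirror host_management.sh):

# Poll bdevperf's own RPC socket until at least 100 reads have completed.
for i in {10..1}; do
    reads=$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
            | jq -r '.bdevs[0].num_read_ops')
    [ "$reads" -ge 100 ] && break
    sleep 0.25
done
# Revoke the host's permission while I/O is outstanding: the target drops the
# qpair, queued WRITEs complete as ABORTED - SQ DELETION, and the initiator-side
# bdev resets the controller (the records logged above).
scripts/rpc.py nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
sleep 1
# Re-grant access so the follow-up bdevperf run can verify the path recovers.
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0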
00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1991672 00:10:26.560 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1991672) - No such process 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:26.560 { 00:10:26.560 "params": { 00:10:26.560 "name": "Nvme$subsystem", 00:10:26.560 "trtype": "$TEST_TRANSPORT", 00:10:26.560 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:26.560 "adrfam": "ipv4", 00:10:26.560 "trsvcid": "$NVMF_PORT", 00:10:26.560 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:26.560 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:26.560 "hdgst": ${hdgst:-false}, 00:10:26.560 "ddgst": ${ddgst:-false} 00:10:26.560 }, 00:10:26.560 "method": "bdev_nvme_attach_controller" 00:10:26.560 } 00:10:26.560 EOF 00:10:26.560 )") 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:10:26.560 06:20:58 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:26.560 "params": { 00:10:26.560 "name": "Nvme0", 00:10:26.560 "trtype": "tcp", 00:10:26.560 "traddr": "10.0.0.2", 00:10:26.560 "adrfam": "ipv4", 00:10:26.560 "trsvcid": "4420", 00:10:26.560 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:26.560 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:10:26.560 "hdgst": false, 00:10:26.560 "ddgst": false 00:10:26.560 }, 00:10:26.560 "method": "bdev_nvme_attach_controller" 00:10:26.560 }' 00:10:26.560 [2024-11-20 06:20:58.393685] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:10:26.560 [2024-11-20 06:20:58.393767] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1991899 ] 00:10:26.818 [2024-11-20 06:20:58.464289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:26.818 [2024-11-20 06:20:58.524536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:27.076 Running I/O for 1 seconds... 
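The JSON block printed here is the same single-controller config as before, only run with -t 1, since this pass merely has to prove that I/O still completes after the remove/add cycle. For interactive debugging, an equivalent attach can be issued against a running bdev application over its RPC socket; this is a hedged equivalent using the standard rpc.py option letters, not how the test itself does it (the test always goes through the JSON config):

scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
    -b Nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0
# The first -s selects the RPC socket, the second is the NVMe/TCP service ID
# (port 4420); digest options are left at their defaults, matching the false
# hdgst/ddgst values in the JSON above.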
00:10:28.448 1664.00 IOPS, 104.00 MiB/s
00:10:28.448 Latency(us)
00:10:28.448 [2024-11-20T05:21:00.284Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:10:28.448 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:10:28.448 Verification LBA range: start 0x0 length 0x400
00:10:28.448 Nvme0n1 : 1.01 1703.20 106.45 0.00 0.00 36959.10 4830.25 33593.27
00:10:28.448 [2024-11-20T05:21:00.284Z] ===================================================================================================================
00:10:28.448 [2024-11-20T05:21:00.284Z] Total : 1703.20 106.45 0.00 0.00 36959.10 4830.25 33593.27
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20}
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp
00:10:28.448 rmmod nvme_tcp
00:10:28.448 rmmod nvme_fabrics
00:10:28.448 rmmod nvme_keyring
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 1991620 ']'
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 1991620
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 1991620 ']'
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 1991620
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # uname
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1991620
00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 06:21:00
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1991620' 00:10:28.448 killing process with pid 1991620 00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 1991620 00:10:28.448 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 1991620 00:10:28.708 [2024-11-20 06:21:00.435899] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:28.708 06:21:00 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:10:31.244 00:10:31.244 real 0m8.883s 00:10:31.244 user 0m19.836s 00:10:31.244 sys 0m2.756s 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:10:31.244 ************************************ 00:10:31.244 END TEST nvmf_host_management 00:10:31.244 ************************************ 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:31.244 ************************************ 00:10:31.244 START TEST nvmf_lvol 00:10:31.244 ************************************ 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:10:31.244 * Looking for test storage... 00:10:31.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:31.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.244 --rc genhtml_branch_coverage=1 00:10:31.244 --rc genhtml_function_coverage=1 00:10:31.244 --rc genhtml_legend=1 00:10:31.244 --rc geninfo_all_blocks=1 00:10:31.244 --rc geninfo_unexecuted_blocks=1 00:10:31.244 00:10:31.244 ' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:31.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.244 --rc genhtml_branch_coverage=1 00:10:31.244 --rc genhtml_function_coverage=1 00:10:31.244 --rc genhtml_legend=1 00:10:31.244 --rc geninfo_all_blocks=1 00:10:31.244 --rc geninfo_unexecuted_blocks=1 00:10:31.244 00:10:31.244 ' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:31.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.244 --rc genhtml_branch_coverage=1 00:10:31.244 --rc genhtml_function_coverage=1 00:10:31.244 --rc genhtml_legend=1 00:10:31.244 --rc geninfo_all_blocks=1 00:10:31.244 --rc geninfo_unexecuted_blocks=1 00:10:31.244 00:10:31.244 ' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:31.244 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:31.244 --rc genhtml_branch_coverage=1 00:10:31.244 --rc genhtml_function_coverage=1 00:10:31.244 --rc genhtml_legend=1 00:10:31.244 --rc geninfo_all_blocks=1 00:10:31.244 --rc geninfo_unexecuted_blocks=1 00:10:31.244 00:10:31.244 ' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
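The cmp_versions trace above is the coverage gate for this test: lcov --version is parsed and, because 1.15 sorts before 2, the --rc lcov_branch_coverage/--rc lcov_function_coverage options that follow are kept in LCOV_OPTS. Reduced to a standalone helper it works roughly as below; this is a sketch that mirrors the traced logic (split on ".-:", compare component-wise), not the full scripts/common.sh source, and the helper names are only illustrative.

    # Sketch of the version gate traced above: split on ".-:", compare
    # component-wise, treat missing components as 0.
    cmp_versions() {
        local -a ver1 ver2
        local op=$2 v d1 d2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$3"
        for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 > d2)) && { [[ $op == '>' ]]; return; }
            ((d1 < d2)) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    lt 1.15 2 && echo "lcov 1.15 < 2: keep the --rc lcov_* coverage options"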
00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:31.244 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:31.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # 
LVOL_BDEV_INIT_SIZE=20 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:10:31.245 06:21:02 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:33.148 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:33.148 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:33.148 06:21:04 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:33.148 Found net devices under 0000:09:00.0: cvl_0_0 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:33.148 Found net devices under 0000:09:00.1: cvl_0_1 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:33.148 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:33.407 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.407 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.379 ms 00:10:33.407 00:10:33.407 --- 10.0.0.2 ping statistics --- 00:10:33.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.407 rtt min/avg/max/mdev = 0.379/0.379/0.379/0.000 ms 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:33.407 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.407 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:10:33.407 00:10:33.407 --- 10.0.0.1 ping statistics --- 00:10:33.407 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.407 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:33.407 06:21:04 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=1994160 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 1994160 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 1994160 ']' 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:33.407 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:33.407 [2024-11-20 06:21:05.075353] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
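At this point nvmftestinit has finished wiring up the physical ports for the TCP run: the two e810 functions found above (0000:09:00.0 -> cvl_0_0, 0000:09:00.1 -> cvl_0_1) are split across network namespaces, 10.0.0.2 is the target address, 10.0.0.1 the initiator address, and the nvmf_tgt now starting runs inside the namespace. Condensed from the commands traced above; interface names, addresses and the core mask are specific to this host and run.

    # Namespace topology used by the phy TCP tests, as traced above.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side e810 port
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # root ns -> target ns (0.379 ms here)
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target ns -> root ns (0.165 ms here)
    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 &   # three reactors, cores 0-2

The two pings, one in each direction, are the gate before the target application is launched inside the namespace.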
00:10:33.407 [2024-11-20 06:21:05.075430] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.407 [2024-11-20 06:21:05.150101] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:33.407 [2024-11-20 06:21:05.208902] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.407 [2024-11-20 06:21:05.208952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.407 [2024-11-20 06:21:05.208979] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.407 [2024-11-20 06:21:05.208990] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.407 [2024-11-20 06:21:05.209000] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:33.407 [2024-11-20 06:21:05.210599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.407 [2024-11-20 06:21:05.210630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.407 [2024-11-20 06:21:05.210649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.665 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:33.922 [2024-11-20 06:21:05.613192] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.922 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.181 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:10:34.181 06:21:05 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:34.440 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:10:34.440 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:10:34.698 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:10:35.264 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=f85f744e-a82b-48e8-8c69-cf6e53eb6289 00:10:35.264 06:21:06 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u f85f744e-a82b-48e8-8c69-cf6e53eb6289 lvol 20 00:10:35.264 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=d9e7bca1-619a-43cb-96f0-61e61c2917cc 00:10:35.264 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:35.829 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d9e7bca1-619a-43cb-96f0-61e61c2917cc 00:10:36.087 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:36.345 [2024-11-20 06:21:07.937793] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:36.345 06:21:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:36.604 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1994952 00:10:36.604 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:10:36.604 06:21:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:10:37.537 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot d9e7bca1-619a-43cb-96f0-61e61c2917cc MY_SNAPSHOT 00:10:37.794 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=d92e9035-20d2-4625-aeba-1f7b50476c26 00:10:37.795 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize d9e7bca1-619a-43cb-96f0-61e61c2917cc 30 00:10:38.053 06:21:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone d92e9035-20d2-4625-aeba-1f7b50476c26 MY_CLONE 00:10:38.620 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=b637f922-de3e-46c8-8b9a-cef1845084b7 00:10:38.620 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate b637f922-de3e-46c8-8b9a-cef1845084b7 00:10:39.187 06:21:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1994952 00:10:47.294 Initializing NVMe Controllers 00:10:47.294 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:10:47.294 Controller IO queue size 128, less than required. 00:10:47.294 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
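The "Initializing NVMe Controllers" output above comes from the spdk_nvme_perf client that was started in the background at 06:21:08; its output only surfaces once the run completes at the wait. In RPC terms, the sequence nvmf_lvol.sh traced out up to here is roughly the following; rpc.py is spdk/scripts/rpc.py, the UUIDs are the ones returned in this run (shortened here for readability), and the malloc/lvol sizes are in MiB.

    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                    # -> Malloc0
    rpc.py bdev_malloc_create 64 512                    # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
    rpc.py bdev_lvol_create_lvstore raid0 lvs           # -> f85f744e-...
    rpc.py bdev_lvol_create -u f85f744e-... lvol 20     # -> d9e7bca1-...
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d9e7bca1-...
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

    # 10 s of 4 KiB random writes, queue depth 128, against the TCP target:
    spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 &

    # While the load runs, the lvol is snapshotted, resized from 20 to 30,
    # cloned from the snapshot, and the clone inflated:
    rpc.py bdev_lvol_snapshot d9e7bca1-... MY_SNAPSHOT  # -> d92e9035-...
    rpc.py bdev_lvol_resize d9e7bca1-... 30
    rpc.py bdev_lvol_clone d92e9035-... MY_CLONE        # -> b637f922-...
    rpc.py bdev_lvol_inflate b637f922-...
    wait                                                # perf results follow below

Driving the snapshot/resize/clone/inflate operations while I/O is in flight is the point of the test; the per-core results that follow report the throughput each initiator core sustained while the lvol was being reshaped.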
00:10:47.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:10:47.294 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:10:47.294 Initialization complete. Launching workers. 00:10:47.294 ======================================================== 00:10:47.294 Latency(us) 00:10:47.294 Device Information : IOPS MiB/s Average min max 00:10:47.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10386.80 40.57 12325.25 2162.45 78037.99 00:10:47.294 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10196.60 39.83 12561.56 2281.52 77106.84 00:10:47.294 ======================================================== 00:10:47.294 Total : 20583.40 80.40 12442.32 2162.45 78037.99 00:10:47.294 00:10:47.294 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:47.294 06:21:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d9e7bca1-619a-43cb-96f0-61e61c2917cc 00:10:47.551 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f85f744e-a82b-48e8-8c69-cf6e53eb6289 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:47.814 rmmod nvme_tcp 00:10:47.814 rmmod nvme_fabrics 00:10:47.814 rmmod nvme_keyring 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 1994160 ']' 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 1994160 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 1994160 ']' 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 1994160 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1994160 00:10:47.814 06:21:19 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1994160' 00:10:47.814 killing process with pid 1994160 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 1994160 00:10:47.814 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 1994160 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:48.072 06:21:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:50.613 00:10:50.613 real 0m19.340s 00:10:50.613 user 1m5.439s 00:10:50.613 sys 0m5.748s 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:10:50.613 ************************************ 00:10:50.613 END TEST nvmf_lvol 00:10:50.613 ************************************ 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:50.613 ************************************ 00:10:50.613 START TEST nvmf_lvs_grow 00:10:50.613 ************************************ 00:10:50.613 06:21:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:10:50.613 * Looking for test storage... 
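nvmf_lvs_grow now repeats the same bring-up nvmf_lvol just went through. The teardown that closed nvmf_lvol above (and nvmf_host_management before it) follows one pattern, condensed here from the trace; the netns removal line is an assumption, since _remove_spdk_ns runs with xtrace disabled and its body is not shown.

    rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
    rpc.py bdev_lvol_delete d9e7bca1-619a-43cb-96f0-61e61c2917cc
    rpc.py bdev_lvol_delete_lvstore -u f85f744e-a82b-48e8-8c69-cf6e53eb6289
    sync
    modprobe -v -r nvme-tcp                 # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill $nvmfpid && wait $nvmfpid          # 1994160 in this run (reactor_0)
    iptables-save | grep -v SPDK_NVMF | iptables-restore   # drops the dport-4420 ACCEPT rule
    ip netns delete cvl_0_0_ns_spdk         # assumed body of _remove_spdk_ns (not traced)
    ip -4 addr flush cvl_0_1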
00:10:50.613 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:10:50.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.613 --rc genhtml_branch_coverage=1 00:10:50.613 --rc genhtml_function_coverage=1 00:10:50.613 --rc genhtml_legend=1 00:10:50.613 --rc geninfo_all_blocks=1 00:10:50.613 --rc geninfo_unexecuted_blocks=1 00:10:50.613 00:10:50.613 ' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:10:50.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.613 --rc genhtml_branch_coverage=1 00:10:50.613 --rc genhtml_function_coverage=1 00:10:50.613 --rc genhtml_legend=1 00:10:50.613 --rc geninfo_all_blocks=1 00:10:50.613 --rc geninfo_unexecuted_blocks=1 00:10:50.613 00:10:50.613 ' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:10:50.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.613 --rc genhtml_branch_coverage=1 00:10:50.613 --rc genhtml_function_coverage=1 00:10:50.613 --rc genhtml_legend=1 00:10:50.613 --rc geninfo_all_blocks=1 00:10:50.613 --rc geninfo_unexecuted_blocks=1 00:10:50.613 00:10:50.613 ' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:10:50.613 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:50.613 --rc genhtml_branch_coverage=1 00:10:50.613 --rc genhtml_function_coverage=1 00:10:50.613 --rc genhtml_legend=1 00:10:50.613 --rc geninfo_all_blocks=1 00:10:50.613 --rc geninfo_unexecuted_blocks=1 00:10:50.613 00:10:50.613 ' 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:10:50.613 06:21:22 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:50.613 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:50.614 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:10:50.614 06:21:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:10:52.519 Found 0000:09:00.0 (0x8086 - 0x159b) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:10:52.519 Found 0000:09:00.1 (0x8086 - 0x159b) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:52.519 06:21:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:10:52.519 Found net devices under 0000:09:00.0: cvl_0_0 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:52.519 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:10:52.520 Found net devices under 0000:09:00.1: cvl_0_1 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:52.520 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:52.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:52.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:10:52.779 00:10:52.779 --- 10.0.0.2 ping statistics --- 00:10:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.779 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:52.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:52.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:10:52.779 00:10:52.779 --- 10.0.0.1 ping statistics --- 00:10:52.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:52.779 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=1998448 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 1998448 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 1998448 ']' 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:52.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:52.779 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:52.779 [2024-11-20 06:21:24.454158] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
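For reference, the target-side network setup that nvmf_tcp_init traced above condenses to the sequence below. This is a sketch of the same commands from the trace; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the values this run picked for the two E810 ports.

# Target port is isolated in its own namespace; the initiator port stays in the root namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                  # root ns -> target ns sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> root ns sanity check
modprobe nvme-tcp
# nvmf_tgt then runs inside the namespace: ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1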
00:10:52.779 [2024-11-20 06:21:24.454251] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:52.779 [2024-11-20 06:21:24.526829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:52.779 [2024-11-20 06:21:24.585549] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:52.779 [2024-11-20 06:21:24.585617] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:52.779 [2024-11-20 06:21:24.585630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:52.779 [2024-11-20 06:21:24.585656] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:52.779 [2024-11-20 06:21:24.585666] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:52.779 [2024-11-20 06:21:24.586332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:53.038 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:53.297 [2024-11-20 06:21:24.976298] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:53.297 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:10:53.297 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:10:53.297 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:10:53.297 06:21:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:53.297 ************************************ 00:10:53.297 START TEST lvs_grow_clean 00:10:53.297 ************************************ 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # lvs_grow 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:53.297 06:21:25 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:53.297 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:53.555 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:53.556 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:53.814 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=0e8c9bad-1b55-406e-addb-fc1773da407d 00:10:53.814 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:10:53.814 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:54.072 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:54.072 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:54.072 06:21:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 0e8c9bad-1b55-406e-addb-fc1773da407d lvol 150 00:10:54.329 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=47a540b4-6137-4eaa-ae21-4628e0201d9d 00:10:54.329 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:54.329 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:54.588 [2024-11-20 06:21:26.368699] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:10:54.588 [2024-11-20 06:21:26.368794] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:54.588 true 00:10:54.588 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
0e8c9bad-1b55-406e-addb-fc1773da407d 00:10:54.588 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:54.846 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:54.846 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:55.104 06:21:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 47a540b4-6137-4eaa-ae21-4628e0201d9d 00:10:55.670 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:55.670 [2024-11-20 06:21:27.451995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:55.670 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1998870 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1998870 /var/tmp/bdevperf.sock 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 1998870 ']' 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:55.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:10:55.927 06:21:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:56.184 [2024-11-20 06:21:27.786559] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
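The lvs_grow_clean preparation traced above reduces to the RPC sequence below. This is a condensed sketch: the full /var/jenkins/... paths are shortened to rpc.py and aio_bdev, and $lvs / $lvol stand for the lvstore and lvol UUIDs created in this run (0e8c9bad-1b55-406e-addb-fc1773da407d and 47a540b4-6137-4eaa-ae21-4628e0201d9d).

truncate -s 200M aio_bdev
rpc.py bdev_aio_create aio_bdev aio_bdev 4096
rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs
rpc.py bdev_lvol_get_lvstores -u $lvs | jq -r '.[0].total_data_clusters'   # 49 clusters before the grow
rpc.py bdev_lvol_create -u $lvs lvol 150
truncate -s 400M aio_bdev            # enlarge the backing file (51200 -> 102400 blocks)...
rpc.py bdev_aio_rescan aio_bdev      # ...and let the aio bdev pick up the new size; cluster count stays 49
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 $lvol
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z   # waits for perform_tests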
00:10:56.184 [2024-11-20 06:21:27.786668] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1998870 ] 00:10:56.184 [2024-11-20 06:21:27.853733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:56.184 [2024-11-20 06:21:27.910947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:56.472 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:10:56.472 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:10:56.472 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:56.757 Nvme0n1 00:10:56.757 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:57.018 [ 00:10:57.018 { 00:10:57.018 "name": "Nvme0n1", 00:10:57.018 "aliases": [ 00:10:57.018 "47a540b4-6137-4eaa-ae21-4628e0201d9d" 00:10:57.018 ], 00:10:57.018 "product_name": "NVMe disk", 00:10:57.018 "block_size": 4096, 00:10:57.018 "num_blocks": 38912, 00:10:57.018 "uuid": "47a540b4-6137-4eaa-ae21-4628e0201d9d", 00:10:57.018 "numa_id": 0, 00:10:57.018 "assigned_rate_limits": { 00:10:57.018 "rw_ios_per_sec": 0, 00:10:57.018 "rw_mbytes_per_sec": 0, 00:10:57.018 "r_mbytes_per_sec": 0, 00:10:57.018 "w_mbytes_per_sec": 0 00:10:57.018 }, 00:10:57.018 "claimed": false, 00:10:57.018 "zoned": false, 00:10:57.018 "supported_io_types": { 00:10:57.018 "read": true, 00:10:57.018 "write": true, 00:10:57.018 "unmap": true, 00:10:57.018 "flush": true, 00:10:57.018 "reset": true, 00:10:57.018 "nvme_admin": true, 00:10:57.018 "nvme_io": true, 00:10:57.018 "nvme_io_md": false, 00:10:57.018 "write_zeroes": true, 00:10:57.018 "zcopy": false, 00:10:57.018 "get_zone_info": false, 00:10:57.018 "zone_management": false, 00:10:57.018 "zone_append": false, 00:10:57.018 "compare": true, 00:10:57.018 "compare_and_write": true, 00:10:57.018 "abort": true, 00:10:57.018 "seek_hole": false, 00:10:57.018 "seek_data": false, 00:10:57.018 "copy": true, 00:10:57.018 "nvme_iov_md": false 00:10:57.018 }, 00:10:57.018 "memory_domains": [ 00:10:57.018 { 00:10:57.018 "dma_device_id": "system", 00:10:57.018 "dma_device_type": 1 00:10:57.018 } 00:10:57.018 ], 00:10:57.018 "driver_specific": { 00:10:57.018 "nvme": [ 00:10:57.018 { 00:10:57.018 "trid": { 00:10:57.018 "trtype": "TCP", 00:10:57.018 "adrfam": "IPv4", 00:10:57.018 "traddr": "10.0.0.2", 00:10:57.018 "trsvcid": "4420", 00:10:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:57.018 }, 00:10:57.018 "ctrlr_data": { 00:10:57.018 "cntlid": 1, 00:10:57.018 "vendor_id": "0x8086", 00:10:57.018 "model_number": "SPDK bdev Controller", 00:10:57.018 "serial_number": "SPDK0", 00:10:57.018 "firmware_revision": "25.01", 00:10:57.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:57.018 "oacs": { 00:10:57.018 "security": 0, 00:10:57.018 "format": 0, 00:10:57.018 "firmware": 0, 00:10:57.018 "ns_manage": 0 00:10:57.018 }, 00:10:57.018 "multi_ctrlr": true, 00:10:57.018 
"ana_reporting": false 00:10:57.018 }, 00:10:57.018 "vs": { 00:10:57.018 "nvme_version": "1.3" 00:10:57.018 }, 00:10:57.018 "ns_data": { 00:10:57.018 "id": 1, 00:10:57.018 "can_share": true 00:10:57.018 } 00:10:57.018 } 00:10:57.018 ], 00:10:57.018 "mp_policy": "active_passive" 00:10:57.018 } 00:10:57.018 } 00:10:57.018 ] 00:10:57.018 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1998956 00:10:57.018 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:57.018 06:21:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:57.018 Running I/O for 10 seconds... 00:10:57.948 Latency(us) 00:10:57.948 [2024-11-20T05:21:29.784Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:57.948 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:57.948 Nvme0n1 : 1.00 14934.00 58.34 0.00 0.00 0.00 0.00 0.00 00:10:57.948 [2024-11-20T05:21:29.784Z] =================================================================================================================== 00:10:57.948 [2024-11-20T05:21:29.784Z] Total : 14934.00 58.34 0.00 0.00 0.00 0.00 0.00 00:10:57.948 00:10:58.880 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:10:59.138 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.138 Nvme0n1 : 2.00 15121.00 59.07 0.00 0.00 0.00 0.00 0.00 00:10:59.138 [2024-11-20T05:21:30.974Z] =================================================================================================================== 00:10:59.138 [2024-11-20T05:21:30.974Z] Total : 15121.00 59.07 0.00 0.00 0.00 0.00 0.00 00:10:59.138 00:10:59.138 true 00:10:59.138 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:10:59.138 06:21:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:10:59.396 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:59.396 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:59.396 06:21:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1998956 00:10:59.964 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:59.964 Nvme0n1 : 3.00 15248.33 59.56 0.00 0.00 0.00 0.00 0.00 00:10:59.964 [2024-11-20T05:21:31.800Z] =================================================================================================================== 00:10:59.964 [2024-11-20T05:21:31.800Z] Total : 15248.33 59.56 0.00 0.00 0.00 0.00 0.00 00:10:59.964 00:11:01.337 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:01.337 Nvme0n1 : 4.00 15329.00 59.88 0.00 0.00 0.00 0.00 0.00 00:11:01.337 [2024-11-20T05:21:33.173Z] 
=================================================================================================================== 00:11:01.337 [2024-11-20T05:21:33.173Z] Total : 15329.00 59.88 0.00 0.00 0.00 0.00 0.00 00:11:01.337 00:11:02.272 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:02.272 Nvme0n1 : 5.00 15403.20 60.17 0.00 0.00 0.00 0.00 0.00 00:11:02.272 [2024-11-20T05:21:34.108Z] =================================================================================================================== 00:11:02.272 [2024-11-20T05:21:34.108Z] Total : 15403.20 60.17 0.00 0.00 0.00 0.00 0.00 00:11:02.272 00:11:03.208 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:03.208 Nvme0n1 : 6.00 15473.50 60.44 0.00 0.00 0.00 0.00 0.00 00:11:03.208 [2024-11-20T05:21:35.044Z] =================================================================================================================== 00:11:03.208 [2024-11-20T05:21:35.044Z] Total : 15473.50 60.44 0.00 0.00 0.00 0.00 0.00 00:11:03.208 00:11:04.142 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:04.142 Nvme0n1 : 7.00 15534.00 60.68 0.00 0.00 0.00 0.00 0.00 00:11:04.142 [2024-11-20T05:21:35.978Z] =================================================================================================================== 00:11:04.142 [2024-11-20T05:21:35.978Z] Total : 15534.00 60.68 0.00 0.00 0.00 0.00 0.00 00:11:04.142 00:11:05.075 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:05.075 Nvme0n1 : 8.00 15563.00 60.79 0.00 0.00 0.00 0.00 0.00 00:11:05.075 [2024-11-20T05:21:36.911Z] =================================================================================================================== 00:11:05.075 [2024-11-20T05:21:36.911Z] Total : 15563.00 60.79 0.00 0.00 0.00 0.00 0.00 00:11:05.075 00:11:06.008 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.008 Nvme0n1 : 9.00 15580.56 60.86 0.00 0.00 0.00 0.00 0.00 00:11:06.008 [2024-11-20T05:21:37.844Z] =================================================================================================================== 00:11:06.008 [2024-11-20T05:21:37.844Z] Total : 15580.56 60.86 0.00 0.00 0.00 0.00 0.00 00:11:06.008 00:11:06.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.943 Nvme0n1 : 10.00 15611.20 60.98 0.00 0.00 0.00 0.00 0.00 00:11:06.943 [2024-11-20T05:21:38.779Z] =================================================================================================================== 00:11:06.943 [2024-11-20T05:21:38.779Z] Total : 15611.20 60.98 0.00 0.00 0.00 0.00 0.00 00:11:06.943 00:11:06.943 00:11:06.943 Latency(us) 00:11:06.943 [2024-11-20T05:21:38.779Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:06.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:06.943 Nvme0n1 : 10.00 15617.02 61.00 0.00 0.00 8191.49 4563.25 17379.18 00:11:06.943 [2024-11-20T05:21:38.779Z] =================================================================================================================== 00:11:06.943 [2024-11-20T05:21:38.779Z] Total : 15617.02 61.00 0.00 0.00 8191.49 4563.25 17379.18 00:11:06.943 { 00:11:06.943 "results": [ 00:11:06.943 { 00:11:06.943 "job": "Nvme0n1", 00:11:06.943 "core_mask": "0x2", 00:11:06.943 "workload": "randwrite", 00:11:06.943 "status": "finished", 00:11:06.943 "queue_depth": 128, 00:11:06.943 "io_size": 4096, 00:11:06.943 
"runtime": 10.004471, 00:11:06.943 "iops": 15617.017631416993, 00:11:06.943 "mibps": 61.00397512272263, 00:11:06.943 "io_failed": 0, 00:11:06.943 "io_timeout": 0, 00:11:06.943 "avg_latency_us": 8191.490437124272, 00:11:06.943 "min_latency_us": 4563.247407407407, 00:11:06.943 "max_latency_us": 17379.176296296297 00:11:06.943 } 00:11:06.943 ], 00:11:06.943 "core_count": 1 00:11:06.943 } 00:11:06.943 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1998870 00:11:06.943 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 1998870 ']' 00:11:06.944 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 1998870 00:11:06.944 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:11:06.944 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:06.944 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 1998870 00:11:07.202 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:07.202 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:07.202 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 1998870' 00:11:07.202 killing process with pid 1998870 00:11:07.202 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 1998870 00:11:07.202 Received shutdown signal, test time was about 10.000000 seconds 00:11:07.202 00:11:07.202 Latency(us) 00:11:07.202 [2024-11-20T05:21:39.038Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:07.202 [2024-11-20T05:21:39.038Z] =================================================================================================================== 00:11:07.202 [2024-11-20T05:21:39.038Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:07.202 06:21:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 1998870 00:11:07.202 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:07.769 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:08.028 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:08.028 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:08.287 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:08.287 06:21:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:11:08.287 06:21:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:08.546 [2024-11-20 06:21:40.162270] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:08.546 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:08.804 request: 00:11:08.804 { 00:11:08.804 "uuid": "0e8c9bad-1b55-406e-addb-fc1773da407d", 00:11:08.804 "method": "bdev_lvol_get_lvstores", 00:11:08.804 "req_id": 1 00:11:08.804 } 00:11:08.804 Got JSON-RPC error response 00:11:08.804 response: 00:11:08.804 { 00:11:08.804 "code": -19, 00:11:08.804 "message": "No such device" 00:11:08.804 } 00:11:08.804 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:11:08.804 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:08.804 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:08.804 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:08.804 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:09.062 aio_bdev 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 47a540b4-6137-4eaa-ae21-4628e0201d9d 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=47a540b4-6137-4eaa-ae21-4628e0201d9d 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:09.062 06:21:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:09.320 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 47a540b4-6137-4eaa-ae21-4628e0201d9d -t 2000 00:11:09.579 [ 00:11:09.579 { 00:11:09.579 "name": "47a540b4-6137-4eaa-ae21-4628e0201d9d", 00:11:09.579 "aliases": [ 00:11:09.579 "lvs/lvol" 00:11:09.579 ], 00:11:09.579 "product_name": "Logical Volume", 00:11:09.579 "block_size": 4096, 00:11:09.579 "num_blocks": 38912, 00:11:09.579 "uuid": "47a540b4-6137-4eaa-ae21-4628e0201d9d", 00:11:09.579 "assigned_rate_limits": { 00:11:09.579 "rw_ios_per_sec": 0, 00:11:09.579 "rw_mbytes_per_sec": 0, 00:11:09.579 "r_mbytes_per_sec": 0, 00:11:09.579 "w_mbytes_per_sec": 0 00:11:09.579 }, 00:11:09.579 "claimed": false, 00:11:09.579 "zoned": false, 00:11:09.579 "supported_io_types": { 00:11:09.579 "read": true, 00:11:09.579 "write": true, 00:11:09.579 "unmap": true, 00:11:09.579 "flush": false, 00:11:09.579 "reset": true, 00:11:09.579 "nvme_admin": false, 00:11:09.579 "nvme_io": false, 00:11:09.579 "nvme_io_md": false, 00:11:09.579 "write_zeroes": true, 00:11:09.579 "zcopy": false, 00:11:09.579 "get_zone_info": false, 00:11:09.579 "zone_management": false, 00:11:09.579 "zone_append": false, 00:11:09.579 "compare": false, 00:11:09.579 "compare_and_write": false, 00:11:09.579 "abort": false, 00:11:09.579 "seek_hole": true, 00:11:09.579 "seek_data": true, 00:11:09.579 "copy": false, 00:11:09.579 "nvme_iov_md": false 00:11:09.579 }, 00:11:09.579 "driver_specific": { 00:11:09.579 "lvol": { 00:11:09.579 "lvol_store_uuid": "0e8c9bad-1b55-406e-addb-fc1773da407d", 00:11:09.579 "base_bdev": "aio_bdev", 00:11:09.579 "thin_provision": false, 00:11:09.579 "num_allocated_clusters": 38, 00:11:09.579 "snapshot": false, 00:11:09.579 "clone": false, 00:11:09.579 "esnap_clone": false 00:11:09.579 } 00:11:09.579 } 00:11:09.579 } 00:11:09.579 ] 00:11:09.579 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:11:09.579 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:09.579 
06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:09.836 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:09.836 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:09.836 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:10.093 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:10.093 06:21:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 47a540b4-6137-4eaa-ae21-4628e0201d9d 00:11:10.352 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 0e8c9bad-1b55-406e-addb-fc1773da407d 00:11:10.609 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:10.867 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:10.867 00:11:10.867 real 0m17.658s 00:11:10.867 user 0m16.321s 00:11:10.867 sys 0m2.233s 00:11:10.867 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:10.867 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:11:10.867 ************************************ 00:11:10.867 END TEST lvs_grow_clean 00:11:10.867 ************************************ 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:11.125 ************************************ 00:11:11.125 START TEST lvs_grow_dirty 00:11:11.125 ************************************ 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:11.125 06:21:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:11.383 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:11:11.383 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:11:11.641 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:11.641 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:11.641 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:11:11.899 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:11:11.899 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:11:11.899 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 lvol 150 00:11:12.157 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:12.157 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:12.157 06:21:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:11:12.415 [2024-11-20 06:21:44.076699] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:11:12.415 [2024-11-20 06:21:44.076792] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:11:12.416 true 00:11:12.416 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:12.416 06:21:44 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:11:12.674 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:11:12.674 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:11:12.932 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:13.190 06:21:44 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:11:13.448 [2024-11-20 06:21:45.147979] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:13.448 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2001011 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2001011 /var/tmp/bdevperf.sock 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2001011 ']' 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:13.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:13.706 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:13.706 [2024-11-20 06:21:45.472269] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
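As in the clean case earlier in the log, the grow itself is issued while bdevperf's 10-second randwrite run is in flight. A sketch of that step for this dirty run (the UUID is the lvstore created above):

rpc.py bdev_lvol_grow_lvstore -u c1c7ab43-7c10-4f50-bcac-0415a45451c1
rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 | jq -r '.[0].total_data_clusters'
# expected: 99, up from 49, consistent with the 200M -> 400M backing file at the 4 MiB cluster size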
00:11:13.706 [2024-11-20 06:21:45.472367] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2001011 ] 00:11:13.706 [2024-11-20 06:21:45.538623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:13.964 [2024-11-20 06:21:45.600770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:13.964 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:13.964 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:13.964 06:21:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:11:14.222 Nvme0n1 00:11:14.481 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:11:14.481 [ 00:11:14.481 { 00:11:14.481 "name": "Nvme0n1", 00:11:14.481 "aliases": [ 00:11:14.481 "e7ce7e1a-44c0-4fd8-83ad-cae384467839" 00:11:14.481 ], 00:11:14.481 "product_name": "NVMe disk", 00:11:14.481 "block_size": 4096, 00:11:14.481 "num_blocks": 38912, 00:11:14.481 "uuid": "e7ce7e1a-44c0-4fd8-83ad-cae384467839", 00:11:14.481 "numa_id": 0, 00:11:14.481 "assigned_rate_limits": { 00:11:14.481 "rw_ios_per_sec": 0, 00:11:14.481 "rw_mbytes_per_sec": 0, 00:11:14.481 "r_mbytes_per_sec": 0, 00:11:14.481 "w_mbytes_per_sec": 0 00:11:14.481 }, 00:11:14.481 "claimed": false, 00:11:14.481 "zoned": false, 00:11:14.481 "supported_io_types": { 00:11:14.481 "read": true, 00:11:14.481 "write": true, 00:11:14.481 "unmap": true, 00:11:14.481 "flush": true, 00:11:14.481 "reset": true, 00:11:14.481 "nvme_admin": true, 00:11:14.481 "nvme_io": true, 00:11:14.481 "nvme_io_md": false, 00:11:14.481 "write_zeroes": true, 00:11:14.481 "zcopy": false, 00:11:14.481 "get_zone_info": false, 00:11:14.481 "zone_management": false, 00:11:14.481 "zone_append": false, 00:11:14.481 "compare": true, 00:11:14.481 "compare_and_write": true, 00:11:14.481 "abort": true, 00:11:14.481 "seek_hole": false, 00:11:14.481 "seek_data": false, 00:11:14.481 "copy": true, 00:11:14.481 "nvme_iov_md": false 00:11:14.481 }, 00:11:14.481 "memory_domains": [ 00:11:14.481 { 00:11:14.481 "dma_device_id": "system", 00:11:14.481 "dma_device_type": 1 00:11:14.481 } 00:11:14.481 ], 00:11:14.481 "driver_specific": { 00:11:14.481 "nvme": [ 00:11:14.481 { 00:11:14.481 "trid": { 00:11:14.481 "trtype": "TCP", 00:11:14.481 "adrfam": "IPv4", 00:11:14.481 "traddr": "10.0.0.2", 00:11:14.481 "trsvcid": "4420", 00:11:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:11:14.481 }, 00:11:14.481 "ctrlr_data": { 00:11:14.481 "cntlid": 1, 00:11:14.481 "vendor_id": "0x8086", 00:11:14.481 "model_number": "SPDK bdev Controller", 00:11:14.481 "serial_number": "SPDK0", 00:11:14.481 "firmware_revision": "25.01", 00:11:14.481 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:11:14.481 "oacs": { 00:11:14.481 "security": 0, 00:11:14.481 "format": 0, 00:11:14.481 "firmware": 0, 00:11:14.481 "ns_manage": 0 00:11:14.481 }, 00:11:14.481 "multi_ctrlr": true, 00:11:14.481 
"ana_reporting": false 00:11:14.481 }, 00:11:14.481 "vs": { 00:11:14.481 "nvme_version": "1.3" 00:11:14.481 }, 00:11:14.481 "ns_data": { 00:11:14.481 "id": 1, 00:11:14.481 "can_share": true 00:11:14.481 } 00:11:14.481 } 00:11:14.481 ], 00:11:14.481 "mp_policy": "active_passive" 00:11:14.481 } 00:11:14.481 } 00:11:14.481 ] 00:11:14.739 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2001142 00:11:14.739 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:11:14.739 06:21:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:14.739 Running I/O for 10 seconds... 00:11:15.676 Latency(us) 00:11:15.676 [2024-11-20T05:21:47.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:15.676 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:15.676 Nvme0n1 : 1.00 14798.00 57.80 0.00 0.00 0.00 0.00 0.00 00:11:15.676 [2024-11-20T05:21:47.512Z] =================================================================================================================== 00:11:15.676 [2024-11-20T05:21:47.512Z] Total : 14798.00 57.80 0.00 0.00 0.00 0.00 0.00 00:11:15.676 00:11:16.611 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:16.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:16.611 Nvme0n1 : 2.00 15114.50 59.04 0.00 0.00 0.00 0.00 0.00 00:11:16.612 [2024-11-20T05:21:48.448Z] =================================================================================================================== 00:11:16.612 [2024-11-20T05:21:48.448Z] Total : 15114.50 59.04 0.00 0.00 0.00 0.00 0.00 00:11:16.612 00:11:16.871 true 00:11:16.871 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:16.871 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:11:17.129 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:11:17.129 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:11:17.129 06:21:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2001142 00:11:17.696 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:17.696 Nvme0n1 : 3.00 15262.33 59.62 0.00 0.00 0.00 0.00 0.00 00:11:17.696 [2024-11-20T05:21:49.532Z] =================================================================================================================== 00:11:17.696 [2024-11-20T05:21:49.532Z] Total : 15262.33 59.62 0.00 0.00 0.00 0.00 0.00 00:11:17.696 00:11:18.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:18.629 Nvme0n1 : 4.00 15352.00 59.97 0.00 0.00 0.00 0.00 0.00 00:11:18.629 [2024-11-20T05:21:50.465Z] 
=================================================================================================================== 00:11:18.629 [2024-11-20T05:21:50.465Z] Total : 15352.00 59.97 0.00 0.00 0.00 0.00 0.00 00:11:18.629 00:11:20.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.004 Nvme0n1 : 5.00 15431.20 60.28 0.00 0.00 0.00 0.00 0.00 00:11:20.004 [2024-11-20T05:21:51.840Z] =================================================================================================================== 00:11:20.004 [2024-11-20T05:21:51.840Z] Total : 15431.20 60.28 0.00 0.00 0.00 0.00 0.00 00:11:20.004 00:11:20.937 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:20.937 Nvme0n1 : 6.00 15494.83 60.53 0.00 0.00 0.00 0.00 0.00 00:11:20.937 [2024-11-20T05:21:52.773Z] =================================================================================================================== 00:11:20.937 [2024-11-20T05:21:52.773Z] Total : 15494.83 60.53 0.00 0.00 0.00 0.00 0.00 00:11:20.938 00:11:21.871 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:21.871 Nvme0n1 : 7.00 15531.00 60.67 0.00 0.00 0.00 0.00 0.00 00:11:21.871 [2024-11-20T05:21:53.707Z] =================================================================================================================== 00:11:21.871 [2024-11-20T05:21:53.707Z] Total : 15531.00 60.67 0.00 0.00 0.00 0.00 0.00 00:11:21.871 00:11:22.864 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:22.864 Nvme0n1 : 8.00 15566.25 60.81 0.00 0.00 0.00 0.00 0.00 00:11:22.864 [2024-11-20T05:21:54.700Z] =================================================================================================================== 00:11:22.864 [2024-11-20T05:21:54.700Z] Total : 15566.25 60.81 0.00 0.00 0.00 0.00 0.00 00:11:22.864 00:11:23.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:23.815 Nvme0n1 : 9.00 15614.67 60.99 0.00 0.00 0.00 0.00 0.00 00:11:23.815 [2024-11-20T05:21:55.652Z] =================================================================================================================== 00:11:23.816 [2024-11-20T05:21:55.652Z] Total : 15614.67 60.99 0.00 0.00 0.00 0.00 0.00 00:11:23.816 00:11:24.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.751 Nvme0n1 : 10.00 15635.10 61.07 0.00 0.00 0.00 0.00 0.00 00:11:24.751 [2024-11-20T05:21:56.587Z] =================================================================================================================== 00:11:24.751 [2024-11-20T05:21:56.587Z] Total : 15635.10 61.07 0.00 0.00 0.00 0.00 0.00 00:11:24.751 00:11:24.751 00:11:24.751 Latency(us) 00:11:24.751 [2024-11-20T05:21:56.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.751 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:11:24.751 Nvme0n1 : 10.00 15640.94 61.10 0.00 0.00 8179.08 4538.97 19903.53 00:11:24.751 [2024-11-20T05:21:56.587Z] =================================================================================================================== 00:11:24.751 [2024-11-20T05:21:56.587Z] Total : 15640.94 61.10 0.00 0.00 8179.08 4538.97 19903.53 00:11:24.751 { 00:11:24.751 "results": [ 00:11:24.751 { 00:11:24.751 "job": "Nvme0n1", 00:11:24.751 "core_mask": "0x2", 00:11:24.751 "workload": "randwrite", 00:11:24.751 "status": "finished", 00:11:24.751 "queue_depth": 128, 00:11:24.751 "io_size": 4096, 00:11:24.751 
"runtime": 10.004448, 00:11:24.751 "iops": 15640.942908594257, 00:11:24.751 "mibps": 61.09743323669632, 00:11:24.751 "io_failed": 0, 00:11:24.751 "io_timeout": 0, 00:11:24.751 "avg_latency_us": 8179.077104115025, 00:11:24.751 "min_latency_us": 4538.974814814815, 00:11:24.751 "max_latency_us": 19903.525925925926 00:11:24.751 } 00:11:24.751 ], 00:11:24.751 "core_count": 1 00:11:24.751 } 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2001011 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2001011 ']' 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2001011 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2001011 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2001011' 00:11:24.751 killing process with pid 2001011 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2001011 00:11:24.751 Received shutdown signal, test time was about 10.000000 seconds 00:11:24.751 00:11:24.751 Latency(us) 00:11:24.751 [2024-11-20T05:21:56.587Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:24.751 [2024-11-20T05:21:56.587Z] =================================================================================================================== 00:11:24.751 [2024-11-20T05:21:56.587Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:24.751 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2001011 00:11:25.008 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:25.266 06:21:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:11:25.524 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:25.524 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:11:25.781 06:21:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1998448 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1998448 00:11:25.781 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1998448 Killed "${NVMF_APP[@]}" "$@" 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2002487 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2002487 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2002487 ']' 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:25.781 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:25.781 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:26.040 [2024-11-20 06:21:57.659084] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:11:26.040 [2024-11-20 06:21:57.659162] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:26.040 [2024-11-20 06:21:57.727993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.040 [2024-11-20 06:21:57.780985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:26.040 [2024-11-20 06:21:57.781034] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:26.040 [2024-11-20 06:21:57.781061] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:26.040 [2024-11-20 06:21:57.781072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
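The "dirty" part of lvs_grow_dirty hinges on the kill -9 above: the original nvmf_tgt (pid 1998448) is killed without a clean lvstore shutdown, so the blobstore metadata on the AIO backing file is left dirty. A minimal sketch of what the trace is driving here, with long paths shortened and the PID taken from this run (the script itself goes through its own helpers such as nvmfappstart and waitforlisten from nvmf/common.sh and autotest_common.sh):
# kill the first target hard; the lvstore gets no chance to persist a clean shutdown
kill -9 1998448
# start a replacement target on one core inside the test namespace and wait for its RPC socket
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
waitforlisten $! /var/tmp/spdk.sock
# re-creating the AIO bdev (next in the trace) makes the blobstore replay and recover the dirty lvstore
rpc.py bdev_aio_create <workspace>/spdk/test/nvmf/target/aio_bdev aio_bdev 4096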
00:11:26.040 [2024-11-20 06:21:57.781081] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:26.040 [2024-11-20 06:21:57.781649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:26.298 06:21:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:26.556 [2024-11-20 06:21:58.160133] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:11:26.556 [2024-11-20 06:21:58.160256] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:11:26.556 [2024-11-20 06:21:58.160309] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:26.556 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:26.815 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7ce7e1a-44c0-4fd8-83ad-cae384467839 -t 2000 00:11:27.073 [ 00:11:27.073 { 00:11:27.073 "name": "e7ce7e1a-44c0-4fd8-83ad-cae384467839", 00:11:27.073 "aliases": [ 00:11:27.073 "lvs/lvol" 00:11:27.073 ], 00:11:27.073 "product_name": "Logical Volume", 00:11:27.073 "block_size": 4096, 00:11:27.073 "num_blocks": 38912, 00:11:27.073 "uuid": "e7ce7e1a-44c0-4fd8-83ad-cae384467839", 00:11:27.073 "assigned_rate_limits": { 00:11:27.073 "rw_ios_per_sec": 0, 00:11:27.073 "rw_mbytes_per_sec": 0, 
00:11:27.073 "r_mbytes_per_sec": 0, 00:11:27.073 "w_mbytes_per_sec": 0 00:11:27.073 }, 00:11:27.073 "claimed": false, 00:11:27.073 "zoned": false, 00:11:27.073 "supported_io_types": { 00:11:27.073 "read": true, 00:11:27.073 "write": true, 00:11:27.073 "unmap": true, 00:11:27.073 "flush": false, 00:11:27.073 "reset": true, 00:11:27.073 "nvme_admin": false, 00:11:27.073 "nvme_io": false, 00:11:27.073 "nvme_io_md": false, 00:11:27.073 "write_zeroes": true, 00:11:27.073 "zcopy": false, 00:11:27.073 "get_zone_info": false, 00:11:27.073 "zone_management": false, 00:11:27.073 "zone_append": false, 00:11:27.073 "compare": false, 00:11:27.073 "compare_and_write": false, 00:11:27.073 "abort": false, 00:11:27.073 "seek_hole": true, 00:11:27.073 "seek_data": true, 00:11:27.073 "copy": false, 00:11:27.073 "nvme_iov_md": false 00:11:27.073 }, 00:11:27.073 "driver_specific": { 00:11:27.073 "lvol": { 00:11:27.073 "lvol_store_uuid": "c1c7ab43-7c10-4f50-bcac-0415a45451c1", 00:11:27.073 "base_bdev": "aio_bdev", 00:11:27.073 "thin_provision": false, 00:11:27.073 "num_allocated_clusters": 38, 00:11:27.073 "snapshot": false, 00:11:27.073 "clone": false, 00:11:27.073 "esnap_clone": false 00:11:27.073 } 00:11:27.073 } 00:11:27.073 } 00:11:27.073 ] 00:11:27.073 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:27.073 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:27.073 06:21:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:11:27.330 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:11:27.331 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:27.331 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:11:27.587 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:11:27.587 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:27.845 [2024-11-20 06:21:59.529911] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:11:27.845 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:28.103 request: 00:11:28.103 { 00:11:28.103 "uuid": "c1c7ab43-7c10-4f50-bcac-0415a45451c1", 00:11:28.103 "method": "bdev_lvol_get_lvstores", 00:11:28.103 "req_id": 1 00:11:28.103 } 00:11:28.103 Got JSON-RPC error response 00:11:28.103 response: 00:11:28.103 { 00:11:28.103 "code": -19, 00:11:28.103 "message": "No such device" 00:11:28.103 } 00:11:28.103 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:11:28.103 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:11:28.103 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:11:28.103 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:11:28.103 06:21:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:11:28.360 aio_bdev 00:11:28.360 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:28.360 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:28.360 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:11:28.360 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:11:28.360 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:11:28.360 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:11:28.360 06:22:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:11:28.618 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b e7ce7e1a-44c0-4fd8-83ad-cae384467839 -t 2000 00:11:28.877 [ 00:11:28.877 { 00:11:28.877 "name": "e7ce7e1a-44c0-4fd8-83ad-cae384467839", 00:11:28.877 "aliases": [ 00:11:28.877 "lvs/lvol" 00:11:28.877 ], 00:11:28.877 "product_name": "Logical Volume", 00:11:28.877 "block_size": 4096, 00:11:28.877 "num_blocks": 38912, 00:11:28.877 "uuid": "e7ce7e1a-44c0-4fd8-83ad-cae384467839", 00:11:28.877 "assigned_rate_limits": { 00:11:28.877 "rw_ios_per_sec": 0, 00:11:28.877 "rw_mbytes_per_sec": 0, 00:11:28.877 "r_mbytes_per_sec": 0, 00:11:28.877 "w_mbytes_per_sec": 0 00:11:28.877 }, 00:11:28.877 "claimed": false, 00:11:28.877 "zoned": false, 00:11:28.877 "supported_io_types": { 00:11:28.877 "read": true, 00:11:28.877 "write": true, 00:11:28.877 "unmap": true, 00:11:28.877 "flush": false, 00:11:28.877 "reset": true, 00:11:28.877 "nvme_admin": false, 00:11:28.877 "nvme_io": false, 00:11:28.877 "nvme_io_md": false, 00:11:28.877 "write_zeroes": true, 00:11:28.877 "zcopy": false, 00:11:28.877 "get_zone_info": false, 00:11:28.877 "zone_management": false, 00:11:28.877 "zone_append": false, 00:11:28.877 "compare": false, 00:11:28.877 "compare_and_write": false, 00:11:28.877 "abort": false, 00:11:28.877 "seek_hole": true, 00:11:28.877 "seek_data": true, 00:11:28.877 "copy": false, 00:11:28.877 "nvme_iov_md": false 00:11:28.877 }, 00:11:28.877 "driver_specific": { 00:11:28.877 "lvol": { 00:11:28.877 "lvol_store_uuid": "c1c7ab43-7c10-4f50-bcac-0415a45451c1", 00:11:28.877 "base_bdev": "aio_bdev", 00:11:28.877 "thin_provision": false, 00:11:28.877 "num_allocated_clusters": 38, 00:11:28.877 "snapshot": false, 00:11:28.877 "clone": false, 00:11:28.877 "esnap_clone": false 00:11:28.877 } 00:11:28.877 } 00:11:28.877 } 00:11:28.877 ] 00:11:28.877 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:11:28.877 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:28.877 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:11:29.136 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:11:29.136 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:29.137 06:22:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:11:29.395 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:11:29.395 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete e7ce7e1a-44c0-4fd8-83ad-cae384467839 00:11:29.653 06:22:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1c7ab43-7c10-4f50-bcac-0415a45451c1 00:11:30.218 06:22:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:11:30.218 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:11:30.476 00:11:30.476 real 0m19.347s 00:11:30.476 user 0m48.841s 00:11:30.476 sys 0m4.640s 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:11:30.476 ************************************ 00:11:30.476 END TEST lvs_grow_dirty 00:11:30.476 ************************************ 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:11:30.476 nvmf_trace.0 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:30.476 rmmod nvme_tcp 00:11:30.476 rmmod nvme_fabrics 00:11:30.476 rmmod nvme_keyring 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:11:30.476 
06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2002487 ']' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2002487 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2002487 ']' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2002487 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2002487 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2002487' 00:11:30.476 killing process with pid 2002487 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2002487 00:11:30.476 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2002487 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.735 06:22:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:33.269 00:11:33.269 real 0m42.548s 00:11:33.269 user 1m11.223s 00:11:33.269 sys 0m8.897s 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:11:33.269 ************************************ 00:11:33.269 END TEST nvmf_lvs_grow 00:11:33.269 ************************************ 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.269 ************************************ 00:11:33.269 START TEST nvmf_bdev_io_wait 00:11:33.269 ************************************ 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:11:33.269 * Looking for test storage... 00:11:33.269 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:33.269 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.269 --rc genhtml_branch_coverage=1 00:11:33.269 --rc genhtml_function_coverage=1 00:11:33.269 --rc genhtml_legend=1 00:11:33.269 --rc geninfo_all_blocks=1 00:11:33.269 --rc geninfo_unexecuted_blocks=1 00:11:33.269 00:11:33.269 ' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.269 06:22:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.269 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:33.270 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # 
MALLOC_BLOCK_SIZE=512 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:11:33.270 06:22:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:35.179 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:35.179 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.179 06:22:06 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:35.179 Found net devices under 0000:09:00.0: cvl_0_0 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:35.179 Found net devices under 0000:09:00.1: cvl_0_1 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:11:35.179 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:35.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:11:35.180 00:11:35.180 --- 10.0.0.2 ping statistics --- 00:11:35.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.180 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:11:35.180 00:11:35.180 --- 10.0.0.1 ping statistics --- 00:11:35.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.180 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2005026 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2005026 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2005026 ']' 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:35.180 06:22:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.439 [2024-11-20 06:22:07.023868] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:11:35.439 [2024-11-20 06:22:07.023961] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.439 [2024-11-20 06:22:07.093332] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.439 [2024-11-20 06:22:07.149685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.439 [2024-11-20 06:22:07.149740] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.439 [2024-11-20 06:22:07.149768] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.439 [2024-11-20 06:22:07.149779] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.439 [2024-11-20 06:22:07.149788] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:35.439 [2024-11-20 06:22:07.151361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.439 [2024-11-20 06:22:07.151428] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.439 [2024-11-20 06:22:07.151491] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.439 [2024-11-20 06:22:07.151495] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.439 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:35.439 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:11:35.439 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:35.439 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:35.439 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.439 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- 
# set +x 00:11:35.697 [2024-11-20 06:22:07.354734] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 Malloc0 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:35.697 [2024-11-20 06:22:07.406750] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2005164 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2005167 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.697 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.698 { 00:11:35.698 "params": { 
00:11:35.698 "name": "Nvme$subsystem", 00:11:35.698 "trtype": "$TEST_TRANSPORT", 00:11:35.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "$NVMF_PORT", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.698 "hdgst": ${hdgst:-false}, 00:11:35.698 "ddgst": ${ddgst:-false} 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 } 00:11:35.698 EOF 00:11:35.698 )") 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2005170 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.698 { 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme$subsystem", 00:11:35.698 "trtype": "$TEST_TRANSPORT", 00:11:35.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "$NVMF_PORT", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.698 "hdgst": ${hdgst:-false}, 00:11:35.698 "ddgst": ${ddgst:-false} 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 } 00:11:35.698 EOF 00:11:35.698 )") 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2005173 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.698 { 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme$subsystem", 00:11:35.698 "trtype": "$TEST_TRANSPORT", 00:11:35.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "$NVMF_PORT", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.698 "hdgst": ${hdgst:-false}, 
00:11:35.698 "ddgst": ${ddgst:-false} 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 } 00:11:35.698 EOF 00:11:35.698 )") 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:11:35.698 { 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme$subsystem", 00:11:35.698 "trtype": "$TEST_TRANSPORT", 00:11:35.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "$NVMF_PORT", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:35.698 "hdgst": ${hdgst:-false}, 00:11:35.698 "ddgst": ${ddgst:-false} 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 } 00:11:35.698 EOF 00:11:35.698 )") 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2005164 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme1", 00:11:35.698 "trtype": "tcp", 00:11:35.698 "traddr": "10.0.0.2", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "4420", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.698 "hdgst": false, 00:11:35.698 "ddgst": false 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 }' 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme1", 00:11:35.698 "trtype": "tcp", 00:11:35.698 "traddr": "10.0.0.2", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "4420", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.698 "hdgst": false, 00:11:35.698 "ddgst": false 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 }' 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme1", 00:11:35.698 "trtype": "tcp", 00:11:35.698 "traddr": "10.0.0.2", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "4420", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.698 "hdgst": false, 00:11:35.698 "ddgst": false 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 }' 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:11:35.698 06:22:07 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:11:35.698 "params": { 00:11:35.698 "name": "Nvme1", 00:11:35.698 "trtype": "tcp", 00:11:35.698 "traddr": "10.0.0.2", 00:11:35.698 "adrfam": "ipv4", 00:11:35.698 "trsvcid": "4420", 00:11:35.698 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:35.698 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:35.698 "hdgst": false, 00:11:35.698 "ddgst": false 00:11:35.698 }, 00:11:35.698 "method": "bdev_nvme_attach_controller" 00:11:35.698 }' 00:11:35.698 [2024-11-20 06:22:07.457418] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:11:35.698 [2024-11-20 06:22:07.457418] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:11:35.698 [2024-11-20 06:22:07.457514] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 06:22:07.457515] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:11:35.698 --proc-type=auto ] 00:11:35.698 [2024-11-20 06:22:07.458018] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:11:35.698 [2024-11-20 06:22:07.458018] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:11:35.698 [2024-11-20 06:22:07.458091] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 06:22:07.458091] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:11:35.698 --proc-type=auto ] 00:11:35.956 [2024-11-20 06:22:07.647520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:35.956 [2024-11-20 06:22:07.703468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:11:35.956 [2024-11-20 06:22:07.753088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.214 [2024-11-20 06:22:07.811000] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:11:36.214 [2024-11-20 06:22:07.830068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.214 [2024-11-20 06:22:07.881501] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:11:36.214 [2024-11-20 06:22:07.909233] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.214 [2024-11-20 06:22:07.960609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:11:36.214 Running I/O for 1 seconds... 00:11:36.472 Running I/O for 1 seconds... 00:11:36.472 Running I/O for 1 seconds... 00:11:36.472 Running I/O for 1 seconds... 00:11:37.407 11166.00 IOPS, 43.62 MiB/s 00:11:37.407 Latency(us) 00:11:37.407 [2024-11-20T05:22:09.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.407 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:11:37.407 Nvme1n1 : 1.01 11224.58 43.85 0.00 0.00 11360.06 5485.61 20486.07 00:11:37.407 [2024-11-20T05:22:09.243Z] =================================================================================================================== 00:11:37.407 [2024-11-20T05:22:09.243Z] Total : 11224.58 43.85 0.00 0.00 11360.06 5485.61 20486.07 00:11:37.407 5437.00 IOPS, 21.24 MiB/s 00:11:37.407 Latency(us) 00:11:37.407 [2024-11-20T05:22:09.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.407 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:11:37.407 Nvme1n1 : 1.02 5467.56 21.36 0.00 0.00 23196.42 8883.77 35340.89 00:11:37.407 [2024-11-20T05:22:09.243Z] =================================================================================================================== 00:11:37.407 [2024-11-20T05:22:09.243Z] Total : 5467.56 21.36 0.00 0.00 23196.42 8883.77 35340.89 00:11:37.407 187096.00 IOPS, 730.84 MiB/s 00:11:37.407 Latency(us) 00:11:37.407 [2024-11-20T05:22:09.243Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.407 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:11:37.407 Nvme1n1 : 1.00 186746.94 729.48 0.00 0.00 681.76 286.72 1844.72 00:11:37.408 [2024-11-20T05:22:09.244Z] =================================================================================================================== 00:11:37.408 [2024-11-20T05:22:09.244Z] Total : 186746.94 729.48 0.00 0.00 681.76 286.72 1844.72 00:11:37.408 5348.00 IOPS, 20.89 MiB/s [2024-11-20T05:22:09.244Z] 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2005167 00:11:37.408 
00:11:37.408 Latency(us) 00:11:37.408 [2024-11-20T05:22:09.244Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:37.408 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:11:37.408 Nvme1n1 : 1.01 5432.91 21.22 0.00 0.00 23454.61 7039.05 50875.35 00:11:37.408 [2024-11-20T05:22:09.244Z] =================================================================================================================== 00:11:37.408 [2024-11-20T05:22:09.244Z] Total : 5432.91 21.22 0.00 0.00 23454.61 7039.05 50875.35 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2005170 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2005173 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:37.665 rmmod nvme_tcp 00:11:37.665 rmmod nvme_fabrics 00:11:37.665 rmmod nvme_keyring 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2005026 ']' 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2005026 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2005026 ']' 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2005026 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:37.665 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2005026 00:11:37.924 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 
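The bdev_io_wait run recorded above boils down to one nvmf_tgt process exporting a Malloc bdev over NVMe/TCP plus four bdevperf clients, each driving a single workload against it. A condensed, stand-alone sketch of that flow, assuming an SPDK checkout in $SPDK_DIR, the stock scripts/rpc.py client, and the 10.0.0.2:4420 listener shown in the trace, might look roughly like this (the real test pins the target inside the cvl_0_0_ns_spdk namespace and feeds each bdevperf its JSON over /dev/fd/63):

#!/usr/bin/env bash
# Hypothetical reconstruction of the flow traced above -- not the script that produced this log.
set -e
SPDK_DIR=${SPDK_DIR:-/path/to/spdk}            # assumption: adjust to the local SPDK build
"$SPDK_DIR/build/bin/nvmf_tgt" -m 0xF --wait-for-rpc &
sleep 2                                        # crude stand-in for the harness's waitforlisten

rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }
rpc bdev_set_options -p 5 -c 1
rpc framework_start_init
rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# JSON handed to each bdevperf instance (same shape gen_nvmf_target_json prints above).
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF

# One client per workload, mirroring the core masks and shm ids seen in the log.
for spec in "0x10 1 write" "0x20 2 read" "0x40 3 flush" "0x80 4 unmap"; do
  set -- $spec
  "$SPDK_DIR/build/examples/bdevperf" -m "$1" -i "$2" \
    --json /tmp/nvme1.json -q 128 -o 4096 -w "$3" -t 1 -s 256 &
done
wait

Because the four clients run concurrently and print their Latency(us) tables independently, the write, unmap, flush and read results appear in the log out of submission order, as above.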
00:11:37.924 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:37.924 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2005026' 00:11:37.924 killing process with pid 2005026 00:11:37.924 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2005026 00:11:37.924 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2005026 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:37.925 06:22:09 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.461 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:40.461 00:11:40.461 real 0m7.215s 00:11:40.461 user 0m16.037s 00:11:40.461 sys 0m3.471s 00:11:40.461 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:11:40.462 ************************************ 00:11:40.462 END TEST nvmf_bdev_io_wait 00:11:40.462 ************************************ 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:40.462 ************************************ 00:11:40.462 START TEST nvmf_queue_depth 00:11:40.462 ************************************ 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:11:40.462 * Looking for test storage... 
00:11:40.462 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.462 --rc genhtml_branch_coverage=1 00:11:40.462 --rc genhtml_function_coverage=1 00:11:40.462 --rc genhtml_legend=1 00:11:40.462 --rc geninfo_all_blocks=1 00:11:40.462 --rc geninfo_unexecuted_blocks=1 00:11:40.462 00:11:40.462 ' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.462 --rc genhtml_branch_coverage=1 00:11:40.462 --rc genhtml_function_coverage=1 00:11:40.462 --rc genhtml_legend=1 00:11:40.462 --rc geninfo_all_blocks=1 00:11:40.462 --rc geninfo_unexecuted_blocks=1 00:11:40.462 00:11:40.462 ' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.462 --rc genhtml_branch_coverage=1 00:11:40.462 --rc genhtml_function_coverage=1 00:11:40.462 --rc genhtml_legend=1 00:11:40.462 --rc geninfo_all_blocks=1 00:11:40.462 --rc geninfo_unexecuted_blocks=1 00:11:40.462 00:11:40.462 ' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:40.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:40.462 --rc genhtml_branch_coverage=1 00:11:40.462 --rc genhtml_function_coverage=1 00:11:40.462 --rc genhtml_legend=1 00:11:40.462 --rc geninfo_all_blocks=1 00:11:40.462 --rc geninfo_unexecuted_blocks=1 00:11:40.462 00:11:40.462 ' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth 
-- nvmf/common.sh@7 -- # uname -s 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:40.462 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:40.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:11:40.463 06:22:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:11:42.995 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:42.996 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:42.996 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:42.996 Found net devices under 0000:09:00.0: cvl_0_0 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:42.996 Found net devices under 0000:09:00.1: cvl_0_1 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:42.996 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:42.997 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:42.997 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.258 ms 00:11:42.997 00:11:42.997 --- 10.0.0.2 ping statistics --- 00:11:42.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.997 rtt min/avg/max/mdev = 0.258/0.258/0.258/0.000 ms 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:42.997 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
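The commands traced just above are the heart of nvmf_tcp_init: one port of the two-port E810 NIC (cvl_0_0) is moved into a private network namespace to act as the target side, the other port (cvl_0_1) stays in the root namespace as the initiator, TCP port 4420 is opened, and connectivity is verified in both directions. A minimal standalone sketch of the same topology, built only from the interface names and 10.0.0.x addresses reported in the trace (run as root):

NS=cvl_0_0_ns_spdk
ip netns add "$NS"                                        # namespace that will host the NVMe/TCP target
ip link set cvl_0_0 netns "$NS"                           # target-side port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address, root namespace
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target address, inside the namespace
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port; the comment tag is what lets the cleanup strip this rule later.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                        # root namespace -> target port
ip netns exec "$NS" ping -c 1 10.0.0.1                    # target namespace -> initiator port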
00:11:42.997 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:11:42.997 00:11:42.997 --- 10.0.0.1 ping statistics --- 00:11:42.997 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:42.997 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2007408 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2007408 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2007408 ']' 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:42.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 [2024-11-20 06:22:14.440369] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
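The SPDK banner beginning here comes from nvmfappstart, which launches nvmf_tgt inside that namespace on a single core (-m 0x2) and waits for its RPC socket before the test proceeds. By hand this is roughly the sketch below; the nvmf_tgt and rpc.py paths come from the trace, while the polling loop (and the use of rpc_get_methods as the liveness probe) is only an illustrative stand-in for waitforlisten:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!
# Wait until the target answers on its default RPC socket (illustrative loop,
# not the actual waitforlisten implementation).
for _ in $(seq 1 100); do
    "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
    sleep 0.1
done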
00:11:42.997 [2024-11-20 06:22:14.440463] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:42.997 [2024-11-20 06:22:14.517205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:42.997 [2024-11-20 06:22:14.575423] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:42.997 [2024-11-20 06:22:14.575480] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:42.997 [2024-11-20 06:22:14.575509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:42.997 [2024-11-20 06:22:14.575521] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:42.997 [2024-11-20 06:22:14.575531] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:42.997 [2024-11-20 06:22:14.576184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 [2024-11-20 06:22:14.727735] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.997 Malloc0 00:11:42.997 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.998 06:22:14 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.998 [2024-11-20 06:22:14.777172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2007433 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2007433 /var/tmp/bdevperf.sock 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2007433 ']' 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:11:42.998 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:11:42.998 06:22:14 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:42.998 [2024-11-20 06:22:14.823746] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
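The second SPDK banner starting here belongs to bdevperf, the initiator-side load generator. By this point the whole data path has been configured over RPC; collected in one place, the calls issued by queue_depth.sh are (paths, NQN, and arguments exactly as they appear in the trace, with SPDK assumed to point at the checkout used above):

RPC="$SPDK/scripts/rpc.py"
$RPC nvmf_create_transport -t tcp -o -u 8192           # TCP transport, options as issued by the test
$RPC bdev_malloc_create 64 512 -b Malloc0              # 64 MiB RAM-backed bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Initiator side: bdevperf in RPC mode, queue depth 1024, 4 KiB verify I/O, 10 s runtime.
"$SPDK/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &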
00:11:42.998 [2024-11-20 06:22:14.823819] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2007433 ] 00:11:43.256 [2024-11-20 06:22:14.889030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.256 [2024-11-20 06:22:14.946820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:43.513 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:11:43.513 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:11:43.513 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:11:43.513 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:43.514 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:43.514 NVMe0n1 00:11:43.514 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:43.514 06:22:15 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:11:43.514 Running I/O for 10 seconds... 00:11:45.819 8192.00 IOPS, 32.00 MiB/s [2024-11-20T05:22:18.587Z] 8195.00 IOPS, 32.01 MiB/s [2024-11-20T05:22:19.519Z] 8299.33 IOPS, 32.42 MiB/s [2024-11-20T05:22:20.454Z] 8415.75 IOPS, 32.87 MiB/s [2024-11-20T05:22:21.387Z] 8395.20 IOPS, 32.79 MiB/s [2024-11-20T05:22:22.401Z] 8404.83 IOPS, 32.83 MiB/s [2024-11-20T05:22:23.360Z] 8471.14 IOPS, 33.09 MiB/s [2024-11-20T05:22:24.734Z] 8444.38 IOPS, 32.99 MiB/s [2024-11-20T05:22:25.667Z] 8453.89 IOPS, 33.02 MiB/s [2024-11-20T05:22:25.667Z] 8487.80 IOPS, 33.16 MiB/s 00:11:53.831 Latency(us) 00:11:53.831 [2024-11-20T05:22:25.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.831 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:11:53.831 Verification LBA range: start 0x0 length 0x4000 00:11:53.831 NVMe0n1 : 10.10 8498.92 33.20 0.00 0.00 119994.29 21165.70 70293.43 00:11:53.831 [2024-11-20T05:22:25.667Z] =================================================================================================================== 00:11:53.831 [2024-11-20T05:22:25.667Z] Total : 8498.92 33.20 0.00 0.00 119994.29 21165.70 70293.43 00:11:53.831 { 00:11:53.831 "results": [ 00:11:53.831 { 00:11:53.831 "job": "NVMe0n1", 00:11:53.831 "core_mask": "0x1", 00:11:53.831 "workload": "verify", 00:11:53.831 "status": "finished", 00:11:53.831 "verify_range": { 00:11:53.831 "start": 0, 00:11:53.831 "length": 16384 00:11:53.831 }, 00:11:53.831 "queue_depth": 1024, 00:11:53.831 "io_size": 4096, 00:11:53.831 "runtime": 10.103642, 00:11:53.831 "iops": 8498.915539564841, 00:11:53.831 "mibps": 33.19888882642516, 00:11:53.831 "io_failed": 0, 00:11:53.831 "io_timeout": 0, 00:11:53.831 "avg_latency_us": 119994.28667148013, 00:11:53.831 "min_latency_us": 21165.70074074074, 00:11:53.831 "max_latency_us": 70293.42814814814 00:11:53.831 } 00:11:53.831 ], 00:11:53.831 "core_count": 1 00:11:53.831 } 00:11:53.831 06:22:25 
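The JSON block is the machine-readable form of the table above, and the numbers are internally consistent: 8498.92 IOPS of 4096-byte I/O is 8498.92 × 4096 / 2^20 ≈ 33.2 MiB/s, matching the reported mibps. If that block were captured to a file (say results.json, which the test itself does not do), the headline figures could be pulled out with a small jq one-liner, shown purely for illustration:

# Hypothetical post-processing of the results shown above; jq is not part of the test.
jq -r '.results[0] | "\(.job): \(.iops) IOPS, \(.mibps) MiB/s, avg latency \(.avg_latency_us) us"' results.json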
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2007433 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2007433 ']' 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2007433 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2007433 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2007433' 00:11:53.832 killing process with pid 2007433 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2007433 00:11:53.832 Received shutdown signal, test time was about 10.000000 seconds 00:11:53.832 00:11:53.832 Latency(us) 00:11:53.832 [2024-11-20T05:22:25.668Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:53.832 [2024-11-20T05:22:25.668Z] =================================================================================================================== 00:11:53.832 [2024-11-20T05:22:25.668Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:11:53.832 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2007433 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:54.090 rmmod nvme_tcp 00:11:54.090 rmmod nvme_fabrics 00:11:54.090 rmmod nvme_keyring 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2007408 ']' 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2007408 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2007408 ']' 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@956 -- # kill -0 2007408 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2007408 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2007408' 00:11:54.090 killing process with pid 2007408 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2007408 00:11:54.090 06:22:25 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2007408 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:54.349 06:22:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:56.309 00:11:56.309 real 0m16.261s 00:11:56.309 user 0m21.769s 00:11:56.309 sys 0m3.699s 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1128 -- # xtrace_disable 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:11:56.309 ************************************ 00:11:56.309 END TEST nvmf_queue_depth 00:11:56.309 ************************************ 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@10 -- # set +x 00:11:56.309 ************************************ 00:11:56.309 START TEST nvmf_target_multipath 00:11:56.309 ************************************ 00:11:56.309 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:11:56.567 * Looking for test storage... 00:11:56.567 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:11:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.567 --rc genhtml_branch_coverage=1 00:11:56.567 --rc genhtml_function_coverage=1 00:11:56.567 --rc genhtml_legend=1 00:11:56.567 --rc geninfo_all_blocks=1 00:11:56.567 --rc geninfo_unexecuted_blocks=1 00:11:56.567 00:11:56.567 ' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:11:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.567 --rc genhtml_branch_coverage=1 00:11:56.567 --rc genhtml_function_coverage=1 00:11:56.567 --rc genhtml_legend=1 00:11:56.567 --rc geninfo_all_blocks=1 00:11:56.567 --rc geninfo_unexecuted_blocks=1 00:11:56.567 00:11:56.567 ' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:11:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.567 --rc genhtml_branch_coverage=1 00:11:56.567 --rc genhtml_function_coverage=1 00:11:56.567 --rc genhtml_legend=1 00:11:56.567 --rc geninfo_all_blocks=1 00:11:56.567 --rc geninfo_unexecuted_blocks=1 00:11:56.567 00:11:56.567 ' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:11:56.567 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:56.567 --rc genhtml_branch_coverage=1 00:11:56.567 --rc genhtml_function_coverage=1 00:11:56.567 --rc genhtml_legend=1 00:11:56.567 --rc geninfo_all_blocks=1 00:11:56.567 --rc geninfo_unexecuted_blocks=1 00:11:56.567 00:11:56.567 ' 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:56.567 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:56.568 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:11:56.568 06:22:28 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:59.098 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:59.098 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:11:59.098 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:59.098 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:59.098 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:59.098 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # 
net_devs=() 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:11:59.099 Found 0000:09:00.0 (0x8086 - 0x159b) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:11:59.099 Found 0000:09:00.1 (0x8086 - 0x159b) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:11:59.099 Found net devices under 0000:09:00.0: cvl_0_0 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:59.099 06:22:30 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:11:59.099 Found net devices under 0000:09:00.1: cvl_0_1 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:59.099 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip 
link set cvl_0_1 up 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:59.100 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:59.100 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:11:59.100 00:11:59.100 --- 10.0.0.2 ping statistics --- 00:11:59.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.100 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:59.100 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:59.100 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:11:59.100 00:11:59.100 --- 10.0.0.1 ping statistics --- 00:11:59.100 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:59.100 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:59.100 only one NIC for nvmf test 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 
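Since only one usable NIC pair is present, the multipath test bails out here and nvmftestfini unwinds the environment, just as it did at the end of the queue-depth test. In effect the teardown amounts to the sketch below: the module unloads and the iptables-save pipeline are taken from the trace, while deleting the namespace and flushing the initiator address is an assumption about what the xtrace-suppressed _remove_spdk_ns helper and the final flush come down to.

modprobe -v -r nvme-tcp                                  # also pulls out nvme_fabrics/nvme_keyring dependencies
modprobe -v -r nvme-fabrics
iptables-save | grep -v SPDK_NVMF | iptables-restore     # strip only the SPDK_NVMF-tagged rules added earlier
ip netns delete cvl_0_0_ns_spdk 2>/dev/null              # assumed equivalent of _remove_spdk_ns; the port returns to the root namespace
ip -4 addr flush cvl_0_1                                 # clear the initiator-side address (shown at the end of the trace)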
00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:59.100 rmmod nvme_tcp 00:11:59.100 rmmod nvme_fabrics 00:11:59.100 rmmod nvme_keyring 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:59.100 06:22:30 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@129 -- # return 0 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:01.007 00:12:01.007 real 0m4.649s 00:12:01.007 user 0m0.982s 00:12:01.007 sys 0m1.688s 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:12:01.007 ************************************ 00:12:01.007 END TEST nvmf_target_multipath 00:12:01.007 ************************************ 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:01.007 ************************************ 00:12:01.007 START TEST nvmf_zcopy 00:12:01.007 ************************************ 00:12:01.007 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:12:01.266 * Looking for test storage... 
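The nvmf_zcopy test that starts here configures the target through rpc_cmd, the harness wrapper around SPDK's scripts/rpc.py. As the trace further below shows, the setup it applies amounts to the following sequence (arguments copied from the logged commands; the subsystem, serial and bdev names are the ones this run uses):

    # TCP transport, options exactly as logged: -t tcp -o -c 0 --zcopy (zero-copy enabled)
    rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
    # subsystem cnode1: -a allow any host, -s serial number, -m max namespaces
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    # 32 MiB malloc bdev with 4096-byte blocks, exported as namespace 1 of cnode1
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

The bdevperf initiator is then pointed at the same listener through a JSON config generated on the fly by gen_nvmf_target_json, which is printed verbatim later in this trace.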
00:12:01.266 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:01.266 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:01.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.267 --rc genhtml_branch_coverage=1 00:12:01.267 --rc genhtml_function_coverage=1 00:12:01.267 --rc genhtml_legend=1 00:12:01.267 --rc geninfo_all_blocks=1 00:12:01.267 --rc geninfo_unexecuted_blocks=1 00:12:01.267 00:12:01.267 ' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:01.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.267 --rc genhtml_branch_coverage=1 00:12:01.267 --rc genhtml_function_coverage=1 00:12:01.267 --rc genhtml_legend=1 00:12:01.267 --rc geninfo_all_blocks=1 00:12:01.267 --rc geninfo_unexecuted_blocks=1 00:12:01.267 00:12:01.267 ' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:01.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.267 --rc genhtml_branch_coverage=1 00:12:01.267 --rc genhtml_function_coverage=1 00:12:01.267 --rc genhtml_legend=1 00:12:01.267 --rc geninfo_all_blocks=1 00:12:01.267 --rc geninfo_unexecuted_blocks=1 00:12:01.267 00:12:01.267 ' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:01.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:01.267 --rc genhtml_branch_coverage=1 00:12:01.267 --rc genhtml_function_coverage=1 00:12:01.267 --rc genhtml_legend=1 00:12:01.267 --rc geninfo_all_blocks=1 00:12:01.267 --rc geninfo_unexecuted_blocks=1 00:12:01.267 00:12:01.267 ' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:01.267 06:22:32 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:01.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:01.267 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:12:01.268 06:22:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:03.801 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:03.801 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:03.801 Found net devices under 0000:09:00.0: cvl_0_0 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:03.801 Found net devices under 0000:09:00.1: cvl_0_1 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:03.801 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:03.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:03.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.299 ms 00:12:03.802 00:12:03.802 --- 10.0.0.2 ping statistics --- 00:12:03.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.802 rtt min/avg/max/mdev = 0.299/0.299/0.299/0.000 ms 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:03.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:03.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:03.802 00:12:03.802 --- 10.0.0.1 ping statistics --- 00:12:03.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:03.802 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2012648 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2012648 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2012648 ']' 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:03.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:03.802 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:03.802 [2024-11-20 06:22:35.417940] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:12:03.802 [2024-11-20 06:22:35.418031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:03.802 [2024-11-20 06:22:35.491598] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:03.802 [2024-11-20 06:22:35.544472] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:03.802 [2024-11-20 06:22:35.544528] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:03.802 [2024-11-20 06:22:35.544555] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:03.802 [2024-11-20 06:22:35.544566] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:03.802 [2024-11-20 06:22:35.544576] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:03.802 [2024-11-20 06:22:35.545119] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 [2024-11-20 06:22:35.715117] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 [2024-11-20 06:22:35.731335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 malloc0 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:04.061 { 00:12:04.061 "params": { 00:12:04.061 "name": "Nvme$subsystem", 00:12:04.061 "trtype": "$TEST_TRANSPORT", 00:12:04.061 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:04.061 "adrfam": "ipv4", 00:12:04.061 "trsvcid": "$NVMF_PORT", 00:12:04.061 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:04.061 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:04.061 "hdgst": ${hdgst:-false}, 00:12:04.061 "ddgst": ${ddgst:-false} 00:12:04.061 }, 00:12:04.061 "method": "bdev_nvme_attach_controller" 00:12:04.061 } 00:12:04.061 EOF 00:12:04.061 )") 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:04.061 06:22:35 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:04.061 "params": { 00:12:04.061 "name": "Nvme1", 00:12:04.061 "trtype": "tcp", 00:12:04.061 "traddr": "10.0.0.2", 00:12:04.061 "adrfam": "ipv4", 00:12:04.061 "trsvcid": "4420", 00:12:04.061 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.061 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:04.061 "hdgst": false, 00:12:04.061 "ddgst": false 00:12:04.061 }, 00:12:04.061 "method": "bdev_nvme_attach_controller" 00:12:04.061 }' 00:12:04.061 [2024-11-20 06:22:35.818075] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:12:04.061 [2024-11-20 06:22:35.818155] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2012668 ] 00:12:04.062 [2024-11-20 06:22:35.887785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.319 [2024-11-20 06:22:35.948074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:04.578 Running I/O for 10 seconds... 00:12:06.444 5798.00 IOPS, 45.30 MiB/s [2024-11-20T05:22:39.216Z] 5835.50 IOPS, 45.59 MiB/s [2024-11-20T05:22:40.589Z] 5823.33 IOPS, 45.49 MiB/s [2024-11-20T05:22:41.523Z] 5802.00 IOPS, 45.33 MiB/s [2024-11-20T05:22:42.456Z] 5813.60 IOPS, 45.42 MiB/s [2024-11-20T05:22:43.389Z] 5823.17 IOPS, 45.49 MiB/s [2024-11-20T05:22:44.323Z] 5829.43 IOPS, 45.54 MiB/s [2024-11-20T05:22:45.256Z] 5833.12 IOPS, 45.57 MiB/s [2024-11-20T05:22:46.631Z] 5836.22 IOPS, 45.60 MiB/s [2024-11-20T05:22:46.631Z] 5833.60 IOPS, 45.58 MiB/s 00:12:14.795 Latency(us) 00:12:14.795 [2024-11-20T05:22:46.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:14.795 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:12:14.795 Verification LBA range: start 0x0 length 0x1000 00:12:14.795 Nvme1n1 : 10.02 5837.33 45.60 0.00 0.00 21870.75 4029.25 30874.74 00:12:14.795 [2024-11-20T05:22:46.631Z] =================================================================================================================== 00:12:14.795 [2024-11-20T05:22:46.631Z] Total : 5837.33 45.60 0.00 0.00 21870.75 4029.25 30874.74 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2013988 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:12:14.795 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:12:14.795 { 00:12:14.795 "params": { 00:12:14.795 "name": 
"Nvme$subsystem", 00:12:14.795 "trtype": "$TEST_TRANSPORT", 00:12:14.795 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:14.795 "adrfam": "ipv4", 00:12:14.795 "trsvcid": "$NVMF_PORT", 00:12:14.795 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:14.795 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:14.795 "hdgst": ${hdgst:-false}, 00:12:14.795 "ddgst": ${ddgst:-false} 00:12:14.795 }, 00:12:14.795 "method": "bdev_nvme_attach_controller" 00:12:14.795 } 00:12:14.795 EOF 00:12:14.795 )") 00:12:14.796 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:12:14.796 [2024-11-20 06:22:46.444440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.444481] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:12:14.796 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:12:14.796 06:22:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:12:14.796 "params": { 00:12:14.796 "name": "Nvme1", 00:12:14.796 "trtype": "tcp", 00:12:14.796 "traddr": "10.0.0.2", 00:12:14.796 "adrfam": "ipv4", 00:12:14.796 "trsvcid": "4420", 00:12:14.796 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.796 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:14.796 "hdgst": false, 00:12:14.796 "ddgst": false 00:12:14.796 }, 00:12:14.796 "method": "bdev_nvme_attach_controller" 00:12:14.796 }' 00:12:14.796 [2024-11-20 06:22:46.452413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.452439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.460432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.460456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.468453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.468477] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.476475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.476498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.484493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.484514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.485617] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:12:14.796 [2024-11-20 06:22:46.485687] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2013988 ] 00:12:14.796 [2024-11-20 06:22:46.492518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.492540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.500544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.500566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.508562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.508599] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.516596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.516617] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.524619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.524640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.532642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.532677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.540676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.540696] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.548695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.548716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.555020] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:14.796 [2024-11-20 06:22:46.556708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.556728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.564759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.564796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.572788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.572819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.580784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.580803] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.588805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.588825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:12:14.796 [2024-11-20 06:22:46.596828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.596847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.604848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.604868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.612876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.612896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.617635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.796 [2024-11-20 06:22:46.620894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.620915] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:14.796 [2024-11-20 06:22:46.628926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:14.796 [2024-11-20 06:22:46.628948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.636968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.636997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.644991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.645022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.653010] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.653040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.661033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.661065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.669055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.669085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.677073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.677105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.685086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.685113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.693091] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.693111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.701134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.701164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 
06:22:46.709159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.709189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.717177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.717209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.725175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.725194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.733217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.733237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.741217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.741237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.749247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.749271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.757267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.757313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.765313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.765336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.773334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.773357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.781368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.781389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.789377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.789398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.797398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.797420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.805405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.805428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.813436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.813460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.821460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.821485] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.829477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.829499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.837525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.837547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.845522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.845543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.853541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.853562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.861572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.861602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.869604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.869627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.877621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.877655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.055 [2024-11-20 06:22:46.885655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.055 [2024-11-20 06:22:46.885674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.893678] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.893698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.901683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.901702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.909710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.909731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.917725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.917745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.925743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.925763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.933782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.933802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.941801] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.941821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.949816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.949837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:46.957835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:46.957855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:47.004515] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.004543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:47.009991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.010014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:47.018011] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.018031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 Running I/O for 5 seconds... 00:12:15.313 [2024-11-20 06:22:47.030724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.030752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:47.041177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.041205] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:47.052592] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.052635] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.313 [2024-11-20 06:22:47.063809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.313 [2024-11-20 06:22:47.063843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.074718] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 [2024-11-20 06:22:47.074746] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.087655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 [2024-11-20 06:22:47.087682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.098398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 [2024-11-20 06:22:47.098425] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.109925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 [2024-11-20 06:22:47.109952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.120721] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 
[2024-11-20 06:22:47.120749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.132031] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 [2024-11-20 06:22:47.132058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.314 [2024-11-20 06:22:47.143316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.314 [2024-11-20 06:22:47.143344] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.156083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.156110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.166440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.166469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.177131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.177158] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.187806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.187833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.198629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.198656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.209175] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.209204] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.220026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.220053] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.230810] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.230837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.242040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.242067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.255154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.255181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.265522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.265551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.276335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.276370] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.287625] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.287653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.301014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.301041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.311537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.311565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.322762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.322791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.333441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.333470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.344432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.344463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.357264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.357315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.367579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.367620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.378583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.378625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.391507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.391535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.572 [2024-11-20 06:22:47.401975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.572 [2024-11-20 06:22:47.402002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.412565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.412593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.426235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.426261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.436444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.436472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.447086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.447112] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.457561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.457602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.468777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.468804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.482018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.482044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.492196] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.492233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.502803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.502829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.513763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.513790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.526546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.526574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.536657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.536684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.547738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.547764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.560744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.560771] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.571213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.571240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.582335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.582364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.594897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.594924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.604347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.604375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.614513] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.614541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.625367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.625395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.638050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.638078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.648563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.648592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:15.831 [2024-11-20 06:22:47.659270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:15.831 [2024-11-20 06:22:47.659321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.670117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.670145] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.680910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.680937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.693435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.693462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.704989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.705018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.715032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.715059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.725763] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.725791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.736761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.736788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.749525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.749554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.759658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.759686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.770461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.770490] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.783419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.783447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.793737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.793764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.804525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.804553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.089 [2024-11-20 06:22:47.816762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.089 [2024-11-20 06:22:47.816789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.826411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.826439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.837960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.837987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.850357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.850385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.860050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.860076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.871714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.871741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.884431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.884459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.894546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.894573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.905351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.905379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.090 [2024-11-20 06:22:47.917975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.090 [2024-11-20 06:22:47.918002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.927562] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.927590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.938736] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.938762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.949688] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.949714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.962187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.962214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.972174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.972200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.983494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.983522] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:47.995997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:47.996025] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.005655] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.005682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.017059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.017100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 11626.00 IOPS, 90.83 MiB/s [2024-11-20T05:22:48.184Z] [2024-11-20 06:22:48.027842] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.027869] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.038748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.038775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.051077] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.051104] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.061564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.061593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.072331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.072360] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.085482] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.085509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.095476] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:16.348 [2024-11-20 06:22:48.095503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.106374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.106401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.117001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.117035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.128053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.128079] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.140640] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.140667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.151245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.151273] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.162137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.162164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.348 [2024-11-20 06:22:48.174794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.348 [2024-11-20 06:22:48.174822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.184893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.606 [2024-11-20 06:22:48.184920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.195634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.606 [2024-11-20 06:22:48.195659] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.206411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.606 [2024-11-20 06:22:48.206438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.217261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.606 [2024-11-20 06:22:48.217312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.228210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.606 [2024-11-20 06:22:48.228251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.238983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.606 [2024-11-20 06:22:48.239010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.606 [2024-11-20 06:22:48.251429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.251456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.261911] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.261938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.272493] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.272521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.283270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.283321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.294269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.294324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.307129] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.307156] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.317259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.317312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.328210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.328244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.341068] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.341095] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.351211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.351238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.362058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.362089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.373056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.373083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.384269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.384296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.397055] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.397082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.408567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.408597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.417566] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.417594] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.429436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.429463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.607 [2024-11-20 06:22:48.440197] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.607 [2024-11-20 06:22:48.440226] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.451252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.451278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.463791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.463818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.473558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.473587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.484965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.484991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.496338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.496365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.507209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.507236] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.520941] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.520968] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.531350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.531378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.542507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.542557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.556202] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.556229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.567014] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.567042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.578065] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.578092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.590990] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.591017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.600942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.600969] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.612495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.612523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.623270] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.623321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.634394] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.634422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.645416] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.645443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.656660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.656688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.669724] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.669751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.680243] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.680269] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:16.865 [2024-11-20 06:22:48.691376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:16.865 [2024-11-20 06:22:48.691405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.704160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.704187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.714529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.714557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.725484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.725512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.738439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.738467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.749131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.749160] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.760062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.760094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.772508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.772536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.782576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.782603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.793207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.793235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.804252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.804280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.817297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.817333] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.827514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.827542] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.838526] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.838553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.849619] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.849647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.860602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.860631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.871600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.871628] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.882714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.882741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.895727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.895755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.906242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.906268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.917650] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.917677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.929274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.929325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.940174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.940200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.124 [2024-11-20 06:22:48.952937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.124 [2024-11-20 06:22:48.952964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:48.963104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:48.963132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:48.973897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:48.973923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:48.984679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:48.984706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:48.995492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:48.995521] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.006389] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.006427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.017676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.017703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 11661.50 IOPS, 91.11 MiB/s [2024-11-20T05:22:49.219Z] [2024-11-20 06:22:49.030894] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.030921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.041507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.041535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.052644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.052672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.063153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.063180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.073801] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:12:17.383 [2024-11-20 06:22:49.073828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.087060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.087087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.097374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.097402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.108395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.108423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.120734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.120762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.130498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.130526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.141731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.141758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.152395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.152422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.163204] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.163231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.175647] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.175674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.185834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.185861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.196445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.196473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.383 [2024-11-20 06:22:49.207060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.383 [2024-11-20 06:22:49.207087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.217829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.217857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.230487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.230515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.240591] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.240619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.251911] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.251939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.265008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.265035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.275714] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.275740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.286457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.286485] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.299002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.299029] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.309112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.309139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.319798] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.319827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.330962] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.330989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.342172] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.342199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.354732] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.354759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.365208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.365235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.376187] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.376214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.388692] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.388719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:17.642 [2024-11-20 06:22:49.398528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:12:17.642 [2024-11-20 06:22:49.398556] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:12:17.642 [2024-11-20 06:22:49.409792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:12:17.642 [2024-11-20 06:22:49.409819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[ ... the same two-line error pair, "Requested NSID 1 already in use" followed by "Unable to add namespace", repeats back-to-back roughly every 10-13 ms from 06:22:49.420 through 06:22:52.037 while the zcopy workload keeps running; interleaved throughput samples from that window: ... ]
00:12:18.417 11682.67 IOPS, 91.27 MiB/s [2024-11-20T05:22:50.253Z]
00:12:19.251 11679.25 IOPS, 91.24 MiB/s [2024-11-20T05:22:51.087Z]
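For context, every pair above is the same NSID collision: the test keeps asking the target to attach namespace ID 1 while NSID 1 is still attached. A minimal sketch of the colliding call, assuming the stock scripts/rpc.py client from the SPDK tree is used directly (the rpc_cmd seen later in this log is the test suite's wrapper around it, and the bdev name malloc0 here is illustrative):

# Pinning -n 1 while NSID 1 is attached fails with "Requested NSID 1 already in use",
# and the RPC handler then logs "Unable to add namespace".
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
# Omitting -n lets the target assign the lowest free NSID instead of colliding.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0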
00:12:20.301 11691.40 IOPS, 91.34 MiB/s [2024-11-20T05:22:52.137Z]
00:12:20.301 Latency(us)
00:12:20.301 Device Information : runtime(s)       IOPS      MiB/s    Fail/s     TO/s    Average        min        max
00:12:20.301 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:12:20.301 Nvme1n1            :      5.01     11691.73      91.34      0.00     0.00   10933.59    4563.25   19223.89
00:12:20.301 ===================================================================================================================
00:12:20.301 Total              :               11691.73      91.34      0.00     0.00   10933.59    4563.25   19223.89
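As a quick cross-check of the summary table, not something printed by the tool itself: the job ran at queue depth 128, so Little's law puts the expected average latency near queue_depth / IOPS, which lands close to the reported 10933.59 us average.

# Little's law sanity check: average latency ~= queue depth / throughput.
awk 'BEGIN { printf "predicted avg latency: %.1f us\n", 128 / 11691.73 * 1e6 }'
# Prints roughly 10948 us, within a few tenths of a percent of the measured 10933.59 us.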
[ ... the same NSID-collision error pair continues from 06:22:52.042 through 06:22:52.267 while the remaining queued add-namespace RPCs complete ... ]
00:12:20.559 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2013988) - No such process
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2013988
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:20.559 delay0
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:12:20.559 06:22:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
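The three rpc_cmd calls above detach NSID 1 and re-attach it backed by a deliberately slow delay bdev before the abort run that follows. A standalone sketch of the same sequence, assuming the stock scripts/rpc.py client is invoked directly rather than through the rpc_cmd wrapper (the latency arguments are in microseconds, so 1000000 is roughly one second):

# Drop NSID 1, wrap the malloc0 base bdev in a delay bdev, then re-attach it as NSID 1.
./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
# The slow namespace gives the abort example below long-lived in-flight I/O to cancel.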
50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:12:20.817 [2024-11-20 06:22:52.429480] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:12:27.370 Initializing NVMe Controllers 00:12:27.370 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:27.370 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:27.370 Initialization complete. Launching workers. 00:12:27.370 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 93 00:12:27.370 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 380, failed to submit 33 00:12:27.370 success 194, unsuccessful 186, failed 0 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:27.370 rmmod nvme_tcp 00:12:27.370 rmmod nvme_fabrics 00:12:27.370 rmmod nvme_keyring 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:12:27.370 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2012648 ']' 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2012648 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2012648 ']' 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2012648 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2012648 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2012648' 00:12:27.371 killing process with pid 2012648 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2012648 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2012648 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- 
# '[' '' == iso ']' 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:27.371 06:22:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:29.278 00:12:29.278 real 0m28.186s 00:12:29.278 user 0m41.701s 00:12:29.278 sys 0m8.209s 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 ************************************ 00:12:29.278 END TEST nvmf_zcopy 00:12:29.278 ************************************ 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:29.278 ************************************ 00:12:29.278 START TEST nvmf_nmic 00:12:29.278 ************************************ 00:12:29.278 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:12:29.537 * Looking for test storage... 
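The zcopy wrap-up above removes namespace 1 from nqn.2016-06.io.spdk:cnode1, layers a delay bdev over malloc0, and re-adds it as NSID 1 before pointing the abort example at it. Reduced to direct JSON-RPC client calls, that sequence is roughly the sketch below; it assumes the target from this run is still listening on the default /var/tmp/spdk.sock socket, that malloc0 and cnode1 already exist, and that rpc_cmd in the trace is the harness wrapper around the same client.

  # Namespace swap from the zcopy teardown above, as plain rpc.py calls (sketch only).
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1

The abort example is then run against the delayed namespace exactly as shown in the trace (build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 against 10.0.0.2:4420).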
00:12:29.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:29.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.537 --rc genhtml_branch_coverage=1 00:12:29.537 --rc genhtml_function_coverage=1 00:12:29.537 --rc genhtml_legend=1 00:12:29.537 --rc geninfo_all_blocks=1 00:12:29.537 --rc geninfo_unexecuted_blocks=1 00:12:29.537 00:12:29.537 ' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:29.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.537 --rc genhtml_branch_coverage=1 00:12:29.537 --rc genhtml_function_coverage=1 00:12:29.537 --rc genhtml_legend=1 00:12:29.537 --rc geninfo_all_blocks=1 00:12:29.537 --rc geninfo_unexecuted_blocks=1 00:12:29.537 00:12:29.537 ' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:29.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.537 --rc genhtml_branch_coverage=1 00:12:29.537 --rc genhtml_function_coverage=1 00:12:29.537 --rc genhtml_legend=1 00:12:29.537 --rc geninfo_all_blocks=1 00:12:29.537 --rc geninfo_unexecuted_blocks=1 00:12:29.537 00:12:29.537 ' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:29.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:29.537 --rc genhtml_branch_coverage=1 00:12:29.537 --rc genhtml_function_coverage=1 00:12:29.537 --rc genhtml_legend=1 00:12:29.537 --rc geninfo_all_blocks=1 00:12:29.537 --rc geninfo_unexecuted_blocks=1 00:12:29.537 00:12:29.537 ' 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
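The cmp_versions chatter above is scripts/common.sh checking whether the installed lcov is older than 2.x before exporting the LCOV_OPTS defaults that follow (--rc lcov_branch_coverage=1 and friends). The comparison itself just splits both version strings on dots and compares the fields numerically; a distilled, self-contained sketch of that idea in plain bash (not the verbatim helper):

  # Field-by-field version comparison, distilled from the trace above (sketch).
  version_lt() {                              # succeeds when $1 is older than $2
      local -a a b
      IFS=.- read -ra a <<< "$1"
      IFS=.- read -ra b <<< "$2"
      local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1
  }
  version_lt 1.15 2 && echo "old lcov: use --rc lcov_branch_coverage=1"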
00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:29.537 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:29.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:12:29.538 
06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:12:29.538 06:23:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:32.078 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:32.079 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:32.079 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.079 06:23:03 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:32.079 Found net devices under 0000:09:00.0: cvl_0_0 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:32.079 Found net devices under 0000:09:00.1: cvl_0_1 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:32.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.315 ms 00:12:32.079 00:12:32.079 --- 10.0.0.2 ping statistics --- 00:12:32.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.079 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:12:32.079 00:12:32.079 --- 10.0.0.1 ping statistics --- 00:12:32.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.079 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:32.079 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2017393 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2017393 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2017393 ']' 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.080 [2024-11-20 06:23:03.569117] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
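The two pings above (0.315 ms to the target address, 0.118 ms from inside the namespace back to the initiator address) confirm the wiring nvmf_tcp_init just set up: the first E810 port, cvl_0_0, is moved into the cvl_0_0_ns_spdk network namespace and carries the target IP 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side on 10.0.0.1. Condensed from the trace (a sketch, not the full helper with its address flushes and comment-tagged iptables rule):

  # Target/initiator split used by the nmic test, condensed from the trace above.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic back in

nvmf_tgt is then started inside the namespace (ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the listeners it creates later on 10.0.0.2 are reachable from nvme-cli and fio running in the root namespace over cvl_0_1.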
00:12:32.080 [2024-11-20 06:23:03.569209] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.080 [2024-11-20 06:23:03.644085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.080 [2024-11-20 06:23:03.705753] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.080 [2024-11-20 06:23:03.705806] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:32.080 [2024-11-20 06:23:03.705834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.080 [2024-11-20 06:23:03.705845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.080 [2024-11-20 06:23:03.705855] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.080 [2024-11-20 06:23:03.707457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.080 [2024-11-20 06:23:03.707515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.080 [2024-11-20 06:23:03.707582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.080 [2024-11-20 06:23:03.707585] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.080 [2024-11-20 06:23:03.849887] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.080 Malloc0 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- common/autotest_common.sh@10 -- # set +x 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.080 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.337 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.337 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.337 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.337 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.337 [2024-11-20 06:23:03.921505] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:12:32.338 test case1: single bdev can't be used in multiple subsystems 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.338 [2024-11-20 06:23:03.945392] bdev.c:8321:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:12:32.338 [2024-11-20 06:23:03.945422] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:12:32.338 [2024-11-20 06:23:03.945437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:32.338 request: 00:12:32.338 { 00:12:32.338 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:32.338 "namespace": { 00:12:32.338 "bdev_name": "Malloc0", 00:12:32.338 "no_auto_visible": false 
00:12:32.338 }, 00:12:32.338 "method": "nvmf_subsystem_add_ns", 00:12:32.338 "req_id": 1 00:12:32.338 } 00:12:32.338 Got JSON-RPC error response 00:12:32.338 response: 00:12:32.338 { 00:12:32.338 "code": -32602, 00:12:32.338 "message": "Invalid parameters" 00:12:32.338 } 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:12:32.338 Adding namespace failed - expected result. 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:12:32.338 test case2: host connect to nvmf target in multiple paths 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:32.338 [2024-11-20 06:23:03.953509] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:32.338 06:23:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:32.904 06:23:04 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:12:33.470 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.470 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:12:33.470 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.470 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:12:33.470 06:23:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:12:35.997 06:23:07 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:35.997 [global] 00:12:35.997 thread=1 00:12:35.997 invalidate=1 00:12:35.997 rw=write 00:12:35.997 time_based=1 00:12:35.997 runtime=1 00:12:35.997 ioengine=libaio 00:12:35.997 direct=1 00:12:35.997 bs=4096 00:12:35.997 iodepth=1 00:12:35.997 norandommap=0 00:12:35.997 numjobs=1 00:12:35.997 00:12:35.997 verify_dump=1 00:12:35.997 verify_backlog=512 00:12:35.997 verify_state_save=0 00:12:35.997 do_verify=1 00:12:35.997 verify=crc32c-intel 00:12:35.997 [job0] 00:12:35.997 filename=/dev/nvme0n1 00:12:35.997 Could not set queue depth (nvme0n1) 00:12:35.997 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:35.997 fio-3.35 00:12:35.997 Starting 1 thread 00:12:36.931 00:12:36.931 job0: (groupid=0, jobs=1): err= 0: pid=2017909: Wed Nov 20 06:23:08 2024 00:12:36.931 read: IOPS=21, BW=87.3KiB/s (89.4kB/s)(88.0KiB/1008msec) 00:12:36.931 slat (nsec): min=6903, max=35580, avg=23292.95, stdev=9985.27 00:12:36.931 clat (usec): min=40949, max=42053, avg=41925.36, stdev=221.20 00:12:36.931 lat (usec): min=40982, max=42070, avg=41948.65, stdev=218.61 00:12:36.931 clat percentiles (usec): 00:12:36.931 | 1.00th=[41157], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:12:36.931 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:36.931 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:36.931 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:36.931 | 99.99th=[42206] 00:12:36.932 write: IOPS=507, BW=2032KiB/s (2081kB/s)(2048KiB/1008msec); 0 zone resets 00:12:36.932 slat (nsec): min=6710, max=36260, avg=7692.85, stdev=1884.26 00:12:36.932 clat (usec): min=130, max=261, avg=156.07, stdev=25.55 00:12:36.932 lat (usec): min=137, max=297, avg=163.76, stdev=25.83 00:12:36.932 clat percentiles (usec): 00:12:36.932 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 143], 00:12:36.932 | 30.00th=[ 145], 40.00th=[ 147], 50.00th=[ 149], 60.00th=[ 151], 00:12:36.932 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 172], 95.00th=[ 243], 00:12:36.932 | 99.00th=[ 247], 99.50th=[ 249], 99.90th=[ 262], 99.95th=[ 262], 00:12:36.932 | 99.99th=[ 262] 00:12:36.932 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:12:36.932 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:36.932 lat (usec) : 250=95.51%, 500=0.37% 00:12:36.932 lat (msec) : 50=4.12% 00:12:36.932 cpu : usr=0.30%, sys=0.60%, ctx=534, majf=0, minf=1 00:12:36.932 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:36.932 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.932 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:36.932 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:36.932 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:36.932 00:12:36.932 Run status group 0 (all jobs): 00:12:36.932 READ: bw=87.3KiB/s (89.4kB/s), 87.3KiB/s-87.3KiB/s (89.4kB/s-89.4kB/s), io=88.0KiB (90.1kB), run=1008-1008msec 00:12:36.932 WRITE: bw=2032KiB/s (2081kB/s), 2032KiB/s-2032KiB/s (2081kB/s-2081kB/s), io=2048KiB (2097kB), run=1008-1008msec 00:12:36.932 00:12:36.932 Disk stats (read/write): 00:12:36.932 nvme0n1: ios=69/512, merge=0/0, ticks=825/76, in_queue=901, util=91.58% 00:12:36.932 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:37.189 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:12:37.189 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:37.189 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:12:37.189 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:12:37.189 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:37.190 rmmod nvme_tcp 00:12:37.190 rmmod nvme_fabrics 00:12:37.190 rmmod nvme_keyring 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2017393 ']' 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2017393 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2017393 ']' 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2017393 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2017393 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2017393' 00:12:37.190 killing process with pid 2017393 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2017393 00:12:37.190 06:23:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@976 -- # wait 2017393 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:37.448 06:23:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:39.981 00:12:39.981 real 0m10.141s 00:12:39.981 user 0m22.814s 00:12:39.981 sys 0m2.425s 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:12:39.981 ************************************ 00:12:39.981 END TEST nvmf_nmic 00:12:39.981 ************************************ 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:39.981 ************************************ 00:12:39.981 START TEST nvmf_fio_target 00:12:39.981 ************************************ 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:12:39.981 * Looking for test storage... 
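The nmic run that just finished exercised two things: case1 confirms that a bdev already claimed by one subsystem cannot be added to a second one (the expected -32602 JSON-RPC error above), and case2 connects to the same subsystem through two TCP listeners, 4420 and 4421. Stripped of the harness plumbing, case2 amounts to the nvme-cli calls below; the hostnqn/hostid values are the ones nvme gen-hostnqn produced for this run and will differ elsewhere.

  # Two paths to the same subsystem, as exercised by test case2 above.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 \
      --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
  # The harness then waits for the serial SPDKISFASTANDAWESOME to show up in lsblk,
  # runs the fio write/verify job against /dev/nvme0n1, and tears both paths down:
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1      # reported "disconnected 2 controller(s)" above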
00:12:39.981 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:12:39.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.981 --rc genhtml_branch_coverage=1 00:12:39.981 --rc genhtml_function_coverage=1 00:12:39.981 --rc genhtml_legend=1 00:12:39.981 --rc geninfo_all_blocks=1 00:12:39.981 --rc geninfo_unexecuted_blocks=1 00:12:39.981 00:12:39.981 ' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:12:39.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.981 --rc genhtml_branch_coverage=1 00:12:39.981 --rc genhtml_function_coverage=1 00:12:39.981 --rc genhtml_legend=1 00:12:39.981 --rc geninfo_all_blocks=1 00:12:39.981 --rc geninfo_unexecuted_blocks=1 00:12:39.981 00:12:39.981 ' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:12:39.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.981 --rc genhtml_branch_coverage=1 00:12:39.981 --rc genhtml_function_coverage=1 00:12:39.981 --rc genhtml_legend=1 00:12:39.981 --rc geninfo_all_blocks=1 00:12:39.981 --rc geninfo_unexecuted_blocks=1 00:12:39.981 00:12:39.981 ' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:12:39.981 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:39.981 --rc genhtml_branch_coverage=1 00:12:39.981 --rc genhtml_function_coverage=1 00:12:39.981 --rc genhtml_legend=1 00:12:39.981 --rc geninfo_all_blocks=1 00:12:39.981 --rc geninfo_unexecuted_blocks=1 00:12:39.981 00:12:39.981 ' 00:12:39.981 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # 
uname -s 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:39.982 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:39.982 06:23:11 
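The "[: : integer expression expected" message recorded above is a side effect of '[' '' -eq 1 ']': a numeric test received an empty string, apparently from an unset configuration flag. The log does not show which flag it was, so the variable name in the sketch below is a placeholder; guarding the expansion with a default avoids the error.

  # SOME_TEST_FLAG is a hypothetical name; the trace only shows that its value was empty.
  flag=${SOME_TEST_FLAG:-0}      # treat unset/empty as 0
  if [ "$flag" -eq 1 ]; then     # the test now always compares an integer
      echo "feature enabled"
  fi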
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:12:39.982 06:23:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:41.884 06:23:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:12:41.884 Found 0000:09:00.0 (0x8086 - 0x159b) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:12:41.884 Found 0000:09:00.1 (0x8086 - 0x159b) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:41.884 06:23:13 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:12:41.884 Found net devices under 0000:09:00.0: cvl_0_0 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:41.884 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:12:41.885 Found net devices under 0000:09:00.1: cvl_0_1 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:41.885 06:23:13 
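The gather_supported_nvmf_pci_devs trace above amounts to: scan the PCI bus for supported NICs (here Intel E810, vendor 0x8086, device 0x159b) and report the kernel net devices bound to each function. A condensed standalone sketch of that discovery logic, assuming only the E810 device IDs seen in this run:

  #!/usr/bin/env bash
  # List net devices backed by Intel E810 functions (0x1592 / 0x159b).
  shopt -s nullglob
  for pci in /sys/bus/pci/devices/*; do
      vendor=$(<"$pci/vendor")
      device=$(<"$pci/device")
      [[ $vendor == 0x8086 ]] || continue
      [[ $device == 0x159b || $device == 0x1592 ]] || continue
      for net in "$pci"/net/*; do
          echo "Found net devices under ${pci##*/}: ${net##*/}"
      done
  done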
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:41.885 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:41.885 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:12:41.885 00:12:41.885 --- 10.0.0.2 ping statistics --- 00:12:41.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.885 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:41.885 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:41.885 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:12:41.885 00:12:41.885 --- 10.0.0.1 ping statistics --- 00:12:41.885 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:41.885 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:41.885 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2020120 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2020120 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2020120 ']' 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:42.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:12:42.143 06:23:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.143 [2024-11-20 06:23:13.793319] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
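Condensed, the nvmf_tcp_init sequence traced above moves one E810 port (cvl_0_0) into a private network namespace for the target, keeps the other (cvl_0_1) on the host as the initiator side, assigns 10.0.0.2/10.0.0.1, opens TCP port 4420, verifies reachability in both directions, and then launches nvmf_tgt inside the namespace. The same steps as a standalone sketch (interface names and addresses as in this run; the nvmf_tgt path is shortened):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                  # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                        # initiator/host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                         # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1           # target namespace -> host
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &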
00:12:42.143 [2024-11-20 06:23:13.793415] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:42.144 [2024-11-20 06:23:13.866188] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:42.144 [2024-11-20 06:23:13.922538] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:42.144 [2024-11-20 06:23:13.922593] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:42.144 [2024-11-20 06:23:13.922620] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:42.144 [2024-11-20 06:23:13.922631] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:42.144 [2024-11-20 06:23:13.922641] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:42.144 [2024-11-20 06:23:13.924260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:42.144 [2024-11-20 06:23:13.924327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:42.144 [2024-11-20 06:23:13.924386] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:42.144 [2024-11-20 06:23:13.924389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:42.401 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:12:42.658 [2024-11-20 06:23:14.323124] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:42.658 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:42.915 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:12:42.915 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.173 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:12:43.173 06:23:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.432 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:12:43.432 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:43.690 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:12:43.690 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:12:44.256 06:23:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:44.256 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:12:44.256 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:44.514 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:12:44.514 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:12:45.079 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:12:45.079 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:12:45.079 06:23:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:45.337 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:45.337 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:45.595 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:12:45.595 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:46.160 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:46.160 [2024-11-20 06:23:17.947375] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:46.160 06:23:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:12:46.419 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:12:46.676 06:23:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.608 06:23:19 
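Taken together, the rpc.py calls traced above provision the target end to end: create the TCP transport, carve out seven 64 MB malloc bdevs (block size 512), build raid0 and concat0 on some of them, expose everything through subsystem nqn.2016-06.io.spdk:cnode1 with a listener on 10.0.0.2:4420, and finally attach from the host with nvme connect. A condensed sketch in the order the log shows (rpc.py stands for scripts/rpc.py; hostnqn/hostid are the values generated earlier in this run):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 0 6); do rpc.py bdev_malloc_create 64 512; done          # -> Malloc0..Malloc6
  rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
  rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0
  nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
               --hostid=29f67375-a902-e411-ace9-001e67bc3c9a \
               -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420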
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:47.608 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:12:47.608 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.608 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:12:47.608 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:12:47.608 06:23:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:12:49.508 06:23:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:49.508 [global] 00:12:49.508 thread=1 00:12:49.508 invalidate=1 00:12:49.508 rw=write 00:12:49.508 time_based=1 00:12:49.508 runtime=1 00:12:49.508 ioengine=libaio 00:12:49.508 direct=1 00:12:49.508 bs=4096 00:12:49.508 iodepth=1 00:12:49.508 norandommap=0 00:12:49.508 numjobs=1 00:12:49.508 00:12:49.508 verify_dump=1 00:12:49.508 verify_backlog=512 00:12:49.508 verify_state_save=0 00:12:49.508 do_verify=1 00:12:49.508 verify=crc32c-intel 00:12:49.508 [job0] 00:12:49.508 filename=/dev/nvme0n1 00:12:49.508 [job1] 00:12:49.508 filename=/dev/nvme0n2 00:12:49.508 [job2] 00:12:49.508 filename=/dev/nvme0n3 00:12:49.508 [job3] 00:12:49.508 filename=/dev/nvme0n4 00:12:49.508 Could not set queue depth (nvme0n1) 00:12:49.508 Could not set queue depth (nvme0n2) 00:12:49.508 Could not set queue depth (nvme0n3) 00:12:49.508 Could not set queue depth (nvme0n4) 00:12:49.766 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.766 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.766 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.766 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:49.766 fio-3.35 00:12:49.766 Starting 4 threads 00:12:51.139 00:12:51.139 job0: (groupid=0, jobs=1): err= 0: pid=2021168: Wed Nov 20 06:23:22 2024 00:12:51.139 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8184KiB/1001msec) 00:12:51.139 slat (nsec): min=5445, max=54509, avg=12088.40, stdev=5946.02 00:12:51.139 clat (usec): min=181, max=41995, avg=251.08, stdev=923.66 00:12:51.139 lat (usec): min=188, max=42001, avg=263.17, stdev=923.59 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[ 194], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 212], 
00:12:51.139 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:12:51.139 | 70.00th=[ 237], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 285], 00:12:51.139 | 99.00th=[ 326], 99.50th=[ 343], 99.90th=[ 383], 99.95th=[ 420], 00:12:51.139 | 99.99th=[42206] 00:12:51.139 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:12:51.139 slat (nsec): min=6282, max=54123, avg=15182.16, stdev=7678.94 00:12:51.139 clat (usec): min=131, max=2176, avg=201.89, stdev=78.56 00:12:51.139 lat (usec): min=138, max=2187, avg=217.07, stdev=78.65 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 153], 20.00th=[ 163], 00:12:51.139 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 182], 60.00th=[ 188], 00:12:51.139 | 70.00th=[ 198], 80.00th=[ 229], 90.00th=[ 277], 95.00th=[ 330], 00:12:51.139 | 99.00th=[ 420], 99.50th=[ 474], 99.90th=[ 881], 99.95th=[ 1188], 00:12:51.139 | 99.99th=[ 2180] 00:12:51.139 bw ( KiB/s): min= 8192, max= 8192, per=32.38%, avg=8192.00, stdev= 0.00, samples=1 00:12:51.139 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:51.139 lat (usec) : 250=86.69%, 500=13.09%, 750=0.07%, 1000=0.07% 00:12:51.139 lat (msec) : 2=0.02%, 4=0.02%, 50=0.02% 00:12:51.139 cpu : usr=4.20%, sys=7.60%, ctx=4094, majf=0, minf=1 00:12:51.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.139 issued rwts: total=2046,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.139 job1: (groupid=0, jobs=1): err= 0: pid=2021190: Wed Nov 20 06:23:22 2024 00:12:51.139 read: IOPS=20, BW=81.7KiB/s (83.7kB/s)(84.0KiB/1028msec) 00:12:51.139 slat (nsec): min=13692, max=17260, avg=15118.33, stdev=1455.92 00:12:51.139 clat (usec): min=40980, max=42043, avg=41820.81, stdev=347.45 00:12:51.139 lat (usec): min=40994, max=42057, avg=41835.93, stdev=347.65 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41681], 00:12:51.139 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:12:51.139 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:51.139 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:51.139 | 99.99th=[42206] 00:12:51.139 write: IOPS=498, BW=1992KiB/s (2040kB/s)(2048KiB/1028msec); 0 zone resets 00:12:51.139 slat (nsec): min=6016, max=58213, avg=14509.03, stdev=8316.98 00:12:51.139 clat (usec): min=154, max=1219, avg=273.42, stdev=93.56 00:12:51.139 lat (usec): min=166, max=1226, avg=287.93, stdev=93.37 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[ 161], 5.00th=[ 174], 10.00th=[ 182], 20.00th=[ 217], 00:12:51.139 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 249], 60.00th=[ 265], 00:12:51.139 | 70.00th=[ 285], 80.00th=[ 338], 90.00th=[ 388], 95.00th=[ 408], 00:12:51.139 | 99.00th=[ 478], 99.50th=[ 898], 99.90th=[ 1221], 99.95th=[ 1221], 00:12:51.139 | 99.99th=[ 1221] 00:12:51.139 bw ( KiB/s): min= 4096, max= 4096, per=16.19%, avg=4096.00, stdev= 0.00, samples=1 00:12:51.139 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:51.139 lat (usec) : 250=49.16%, 500=46.15%, 750=0.19%, 1000=0.38% 00:12:51.139 lat (msec) : 2=0.19%, 50=3.94% 00:12:51.139 cpu : usr=0.39%, sys=0.68%, ctx=534, majf=0, minf=1 00:12:51.139 
IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.139 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.139 job2: (groupid=0, jobs=1): err= 0: pid=2021201: Wed Nov 20 06:23:22 2024 00:12:51.139 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:51.139 slat (nsec): min=5003, max=53927, avg=14183.98, stdev=6737.03 00:12:51.139 clat (usec): min=190, max=554, avg=321.95, stdev=66.80 00:12:51.139 lat (usec): min=196, max=574, avg=336.13, stdev=69.08 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[ 198], 5.00th=[ 212], 10.00th=[ 223], 20.00th=[ 243], 00:12:51.139 | 30.00th=[ 302], 40.00th=[ 318], 50.00th=[ 330], 60.00th=[ 338], 00:12:51.139 | 70.00th=[ 355], 80.00th=[ 375], 90.00th=[ 400], 95.00th=[ 429], 00:12:51.139 | 99.00th=[ 486], 99.50th=[ 498], 99.90th=[ 537], 99.95th=[ 553], 00:12:51.139 | 99.99th=[ 553] 00:12:51.139 write: IOPS=1943, BW=7772KiB/s (7959kB/s)(7780KiB/1001msec); 0 zone resets 00:12:51.139 slat (nsec): min=6857, max=60641, avg=15846.07, stdev=7787.89 00:12:51.139 clat (usec): min=145, max=416, avg=225.55, stdev=35.94 00:12:51.139 lat (usec): min=153, max=435, avg=241.39, stdev=36.09 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[ 157], 5.00th=[ 172], 10.00th=[ 184], 20.00th=[ 200], 00:12:51.139 | 30.00th=[ 208], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:12:51.139 | 70.00th=[ 237], 80.00th=[ 249], 90.00th=[ 265], 95.00th=[ 297], 00:12:51.139 | 99.00th=[ 347], 99.50th=[ 359], 99.90th=[ 412], 99.95th=[ 416], 00:12:51.139 | 99.99th=[ 416] 00:12:51.139 bw ( KiB/s): min= 8192, max= 8192, per=32.38%, avg=8192.00, stdev= 0.00, samples=1 00:12:51.139 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:51.139 lat (usec) : 250=55.01%, 500=44.79%, 750=0.20% 00:12:51.139 cpu : usr=3.50%, sys=6.70%, ctx=3482, majf=0, minf=1 00:12:51.139 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.139 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.139 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.139 issued rwts: total=1536,1945,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.139 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.139 job3: (groupid=0, jobs=1): err= 0: pid=2021202: Wed Nov 20 06:23:22 2024 00:12:51.139 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:12:51.139 slat (nsec): min=4713, max=79308, avg=17198.96, stdev=10894.15 00:12:51.139 clat (usec): min=204, max=1090, avg=312.84, stdev=62.21 00:12:51.139 lat (usec): min=210, max=1124, avg=330.04, stdev=66.19 00:12:51.139 clat percentiles (usec): 00:12:51.139 | 1.00th=[ 212], 5.00th=[ 223], 10.00th=[ 231], 20.00th=[ 247], 00:12:51.139 | 30.00th=[ 269], 40.00th=[ 306], 50.00th=[ 318], 60.00th=[ 330], 00:12:51.140 | 70.00th=[ 351], 80.00th=[ 367], 90.00th=[ 388], 95.00th=[ 400], 00:12:51.140 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 603], 99.95th=[ 1090], 00:12:51.140 | 99.99th=[ 1090] 00:12:51.140 write: IOPS=1994, BW=7976KiB/s (8167kB/s)(7984KiB/1001msec); 0 zone resets 00:12:51.140 slat (nsec): min=6216, max=63719, avg=12965.79, stdev=6377.77 00:12:51.140 clat (usec): min=149, max=2035, avg=226.64, stdev=63.66 00:12:51.140 lat (usec): min=156, 
max=2050, avg=239.61, stdev=63.99 00:12:51.140 clat percentiles (usec): 00:12:51.140 | 1.00th=[ 159], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 192], 00:12:51.140 | 30.00th=[ 208], 40.00th=[ 215], 50.00th=[ 223], 60.00th=[ 231], 00:12:51.140 | 70.00th=[ 237], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 289], 00:12:51.140 | 99.00th=[ 408], 99.50th=[ 445], 99.90th=[ 1205], 99.95th=[ 2040], 00:12:51.140 | 99.99th=[ 2040] 00:12:51.140 bw ( KiB/s): min= 8192, max= 8192, per=32.38%, avg=8192.00, stdev= 0.00, samples=1 00:12:51.140 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:12:51.140 lat (usec) : 250=56.26%, 500=43.57%, 750=0.08% 00:12:51.140 lat (msec) : 2=0.06%, 4=0.03% 00:12:51.140 cpu : usr=2.40%, sys=5.80%, ctx=3534, majf=0, minf=1 00:12:51.140 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:51.140 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.140 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:51.140 issued rwts: total=1536,1996,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:51.140 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:51.140 00:12:51.140 Run status group 0 (all jobs): 00:12:51.140 READ: bw=19.5MiB/s (20.5MB/s), 81.7KiB/s-8176KiB/s (83.7kB/s-8372kB/s), io=20.1MiB (21.0MB), run=1001-1028msec 00:12:51.140 WRITE: bw=24.7MiB/s (25.9MB/s), 1992KiB/s-8184KiB/s (2040kB/s-8380kB/s), io=25.4MiB (26.6MB), run=1001-1028msec 00:12:51.140 00:12:51.140 Disk stats (read/write): 00:12:51.140 nvme0n1: ios=1586/1867, merge=0/0, ticks=395/328, in_queue=723, util=86.37% 00:12:51.140 nvme0n2: ios=66/512, merge=0/0, ticks=1645/140, in_queue=1785, util=98.07% 00:12:51.140 nvme0n3: ios=1457/1536, merge=0/0, ticks=680/329, in_queue=1009, util=98.01% 00:12:51.140 nvme0n4: ios=1474/1536, merge=0/0, ticks=683/331, in_queue=1014, util=98.10% 00:12:51.140 06:23:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:51.140 [global] 00:12:51.140 thread=1 00:12:51.140 invalidate=1 00:12:51.140 rw=randwrite 00:12:51.140 time_based=1 00:12:51.140 runtime=1 00:12:51.140 ioengine=libaio 00:12:51.140 direct=1 00:12:51.140 bs=4096 00:12:51.140 iodepth=1 00:12:51.140 norandommap=0 00:12:51.140 numjobs=1 00:12:51.140 00:12:51.140 verify_dump=1 00:12:51.140 verify_backlog=512 00:12:51.140 verify_state_save=0 00:12:51.140 do_verify=1 00:12:51.140 verify=crc32c-intel 00:12:51.140 [job0] 00:12:51.140 filename=/dev/nvme0n1 00:12:51.140 [job1] 00:12:51.140 filename=/dev/nvme0n2 00:12:51.140 [job2] 00:12:51.140 filename=/dev/nvme0n3 00:12:51.140 [job3] 00:12:51.140 filename=/dev/nvme0n4 00:12:51.140 Could not set queue depth (nvme0n1) 00:12:51.140 Could not set queue depth (nvme0n2) 00:12:51.140 Could not set queue depth (nvme0n3) 00:12:51.140 Could not set queue depth (nvme0n4) 00:12:51.140 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:51.140 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:51.140 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:51.140 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:51.140 fio-3.35 00:12:51.140 Starting 4 threads 00:12:52.600 00:12:52.600 job0: (groupid=0, jobs=1): err= 0: 
pid=2021435: Wed Nov 20 06:23:24 2024 00:12:52.600 read: IOPS=1034, BW=4139KiB/s (4239kB/s)(4160KiB/1005msec) 00:12:52.600 slat (nsec): min=5732, max=63391, avg=14326.72, stdev=6974.33 00:12:52.600 clat (usec): min=181, max=41982, avg=657.38, stdev=3977.73 00:12:52.600 lat (usec): min=189, max=41997, avg=671.70, stdev=3978.03 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[ 194], 5.00th=[ 206], 10.00th=[ 217], 20.00th=[ 225], 00:12:52.600 | 30.00th=[ 231], 40.00th=[ 239], 50.00th=[ 255], 60.00th=[ 273], 00:12:52.600 | 70.00th=[ 281], 80.00th=[ 289], 90.00th=[ 310], 95.00th=[ 400], 00:12:52.600 | 99.00th=[ 652], 99.50th=[40633], 99.90th=[42206], 99.95th=[42206], 00:12:52.600 | 99.99th=[42206] 00:12:52.600 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets 00:12:52.600 slat (nsec): min=7244, max=67967, avg=15321.93, stdev=7392.42 00:12:52.600 clat (usec): min=121, max=485, avg=176.92, stdev=40.56 00:12:52.600 lat (usec): min=130, max=496, avg=192.24, stdev=43.11 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[ 130], 5.00th=[ 135], 10.00th=[ 139], 20.00th=[ 145], 00:12:52.600 | 30.00th=[ 153], 40.00th=[ 161], 50.00th=[ 169], 60.00th=[ 178], 00:12:52.600 | 70.00th=[ 186], 80.00th=[ 200], 90.00th=[ 237], 95.00th=[ 260], 00:12:52.600 | 99.00th=[ 302], 99.50th=[ 351], 99.90th=[ 437], 99.95th=[ 486], 00:12:52.600 | 99.99th=[ 486] 00:12:52.600 bw ( KiB/s): min= 3832, max= 8456, per=52.05%, avg=6144.00, stdev=3269.66, samples=2 00:12:52.600 iops : min= 958, max= 2114, avg=1536.00, stdev=817.42, samples=2 00:12:52.600 lat (usec) : 250=74.42%, 500=24.30%, 750=0.89% 00:12:52.600 lat (msec) : 50=0.39% 00:12:52.600 cpu : usr=2.59%, sys=5.38%, ctx=2577, majf=0, minf=1 00:12:52.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 issued rwts: total=1040,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.600 job1: (groupid=0, jobs=1): err= 0: pid=2021436: Wed Nov 20 06:23:24 2024 00:12:52.600 read: IOPS=21, BW=84.5KiB/s (86.6kB/s)(88.0KiB/1041msec) 00:12:52.600 slat (nsec): min=6887, max=37101, avg=15675.45, stdev=5244.77 00:12:52.600 clat (usec): min=40855, max=42024, avg=41385.21, stdev=501.56 00:12:52.600 lat (usec): min=40892, max=42041, avg=41400.89, stdev=500.07 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:52.600 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41681], 00:12:52.600 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:52.600 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:52.600 | 99.99th=[42206] 00:12:52.600 write: IOPS=491, BW=1967KiB/s (2015kB/s)(2048KiB/1041msec); 0 zone resets 00:12:52.600 slat (nsec): min=7757, max=58772, avg=17130.12, stdev=7697.64 00:12:52.600 clat (usec): min=150, max=495, avg=230.41, stdev=34.58 00:12:52.600 lat (usec): min=162, max=517, avg=247.54, stdev=34.05 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[ 165], 5.00th=[ 182], 10.00th=[ 192], 20.00th=[ 206], 00:12:52.600 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:12:52.600 | 70.00th=[ 243], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 281], 00:12:52.600 | 99.00th=[ 343], 99.50th=[ 355], 99.90th=[ 494], 
99.95th=[ 494], 00:12:52.600 | 99.99th=[ 494] 00:12:52.600 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:12:52.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:52.600 lat (usec) : 250=70.60%, 500=25.28% 00:12:52.600 lat (msec) : 50=4.12% 00:12:52.600 cpu : usr=0.77%, sys=0.96%, ctx=535, majf=0, minf=1 00:12:52.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.600 job2: (groupid=0, jobs=1): err= 0: pid=2021437: Wed Nov 20 06:23:24 2024 00:12:52.600 read: IOPS=27, BW=108KiB/s (111kB/s)(112KiB/1037msec) 00:12:52.600 slat (nsec): min=7498, max=22027, avg=16441.36, stdev=3000.56 00:12:52.600 clat (usec): min=189, max=41998, avg=32534.96, stdev=17169.23 00:12:52.600 lat (usec): min=208, max=42014, avg=32551.40, stdev=17169.71 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[ 190], 5.00th=[ 229], 10.00th=[ 229], 20.00th=[ 351], 00:12:52.600 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:52.600 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:52.600 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:52.600 | 99.99th=[42206] 00:12:52.600 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:12:52.600 slat (nsec): min=6772, max=38026, avg=14544.32, stdev=5795.61 00:12:52.600 clat (usec): min=149, max=390, avg=227.26, stdev=26.30 00:12:52.600 lat (usec): min=157, max=406, avg=241.80, stdev=25.84 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 198], 20.00th=[ 208], 00:12:52.600 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 227], 60.00th=[ 235], 00:12:52.600 | 70.00th=[ 241], 80.00th=[ 247], 90.00th=[ 258], 95.00th=[ 265], 00:12:52.600 | 99.00th=[ 285], 99.50th=[ 310], 99.90th=[ 392], 99.95th=[ 392], 00:12:52.600 | 99.99th=[ 392] 00:12:52.600 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:12:52.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:52.600 lat (usec) : 250=79.07%, 500=16.85% 00:12:52.600 lat (msec) : 50=4.07% 00:12:52.600 cpu : usr=0.39%, sys=0.77%, ctx=543, majf=0, minf=1 00:12:52.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.600 job3: (groupid=0, jobs=1): err= 0: pid=2021438: Wed Nov 20 06:23:24 2024 00:12:52.600 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:12:52.600 slat (nsec): min=14081, max=36546, avg=17496.32, stdev=4488.69 00:12:52.600 clat (usec): min=40690, max=41981, avg=41099.28, stdev=362.61 00:12:52.600 lat (usec): min=40705, max=41996, avg=41116.78, stdev=361.83 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:52.600 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:52.600 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:12:52.600 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:52.600 | 99.99th=[42206] 00:12:52.600 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:12:52.600 slat (nsec): min=6409, max=70129, avg=14401.01, stdev=7467.62 00:12:52.600 clat (usec): min=155, max=488, avg=233.15, stdev=41.84 00:12:52.600 lat (usec): min=163, max=505, avg=247.55, stdev=43.15 00:12:52.600 clat percentiles (usec): 00:12:52.600 | 1.00th=[ 165], 5.00th=[ 178], 10.00th=[ 192], 20.00th=[ 210], 00:12:52.600 | 30.00th=[ 217], 40.00th=[ 223], 50.00th=[ 231], 60.00th=[ 237], 00:12:52.600 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 260], 95.00th=[ 277], 00:12:52.600 | 99.00th=[ 441], 99.50th=[ 457], 99.90th=[ 490], 99.95th=[ 490], 00:12:52.600 | 99.99th=[ 490] 00:12:52.600 bw ( KiB/s): min= 4096, max= 4096, per=34.70%, avg=4096.00, stdev= 0.00, samples=1 00:12:52.600 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:52.600 lat (usec) : 250=76.22%, 500=19.66% 00:12:52.600 lat (msec) : 50=4.12% 00:12:52.600 cpu : usr=0.29%, sys=0.68%, ctx=535, majf=0, minf=1 00:12:52.600 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:52.600 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:52.600 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:52.600 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:52.600 00:12:52.600 Run status group 0 (all jobs): 00:12:52.600 READ: bw=4273KiB/s (4375kB/s), 84.5KiB/s-4139KiB/s (86.6kB/s-4239kB/s), io=4448KiB (4555kB), run=1005-1041msec 00:12:52.600 WRITE: bw=11.5MiB/s (12.1MB/s), 1967KiB/s-6113KiB/s (2015kB/s-6260kB/s), io=12.0MiB (12.6MB), run=1005-1041msec 00:12:52.600 00:12:52.600 Disk stats (read/write): 00:12:52.600 nvme0n1: ios=1061/1536, merge=0/0, ticks=1483/267, in_queue=1750, util=98.40% 00:12:52.600 nvme0n2: ios=59/512, merge=0/0, ticks=1468/111, in_queue=1579, util=98.68% 00:12:52.600 nvme0n3: ios=81/512, merge=0/0, ticks=912/113, in_queue=1025, util=98.44% 00:12:52.600 nvme0n4: ios=63/512, merge=0/0, ticks=1457/103, in_queue=1560, util=100.00% 00:12:52.600 06:23:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:52.600 [global] 00:12:52.600 thread=1 00:12:52.600 invalidate=1 00:12:52.600 rw=write 00:12:52.600 time_based=1 00:12:52.600 runtime=1 00:12:52.600 ioengine=libaio 00:12:52.600 direct=1 00:12:52.600 bs=4096 00:12:52.600 iodepth=128 00:12:52.600 norandommap=0 00:12:52.600 numjobs=1 00:12:52.600 00:12:52.600 verify_dump=1 00:12:52.600 verify_backlog=512 00:12:52.600 verify_state_save=0 00:12:52.600 do_verify=1 00:12:52.600 verify=crc32c-intel 00:12:52.600 [job0] 00:12:52.600 filename=/dev/nvme0n1 00:12:52.600 [job1] 00:12:52.600 filename=/dev/nvme0n2 00:12:52.600 [job2] 00:12:52.600 filename=/dev/nvme0n3 00:12:52.600 [job3] 00:12:52.600 filename=/dev/nvme0n4 00:12:52.600 Could not set queue depth (nvme0n1) 00:12:52.600 Could not set queue depth (nvme0n2) 00:12:52.600 Could not set queue depth (nvme0n3) 00:12:52.600 Could not set queue depth (nvme0n4) 00:12:52.600 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.600 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.600 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.600 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:52.600 fio-3.35 00:12:52.600 Starting 4 threads 00:12:53.972 00:12:53.972 job0: (groupid=0, jobs=1): err= 0: pid=2021666: Wed Nov 20 06:23:25 2024 00:12:53.972 read: IOPS=4293, BW=16.8MiB/s (17.6MB/s)(16.8MiB/1003msec) 00:12:53.972 slat (usec): min=2, max=17155, avg=109.37, stdev=624.30 00:12:53.972 clat (usec): min=711, max=38300, avg=14641.24, stdev=5458.19 00:12:53.972 lat (usec): min=3321, max=38339, avg=14750.60, stdev=5503.48 00:12:53.972 clat percentiles (usec): 00:12:53.972 | 1.00th=[ 7111], 5.00th=[ 9241], 10.00th=[10552], 20.00th=[10945], 00:12:53.972 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12125], 60.00th=[14484], 00:12:53.972 | 70.00th=[15926], 80.00th=[17695], 90.00th=[21890], 95.00th=[26346], 00:12:53.972 | 99.00th=[33424], 99.50th=[36439], 99.90th=[36439], 99.95th=[36439], 00:12:53.972 | 99.99th=[38536] 00:12:53.972 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:12:53.972 slat (usec): min=2, max=7013, avg=106.56, stdev=524.76 00:12:53.972 clat (usec): min=6476, max=37475, avg=13835.52, stdev=4812.63 00:12:53.972 lat (usec): min=6482, max=37484, avg=13942.08, stdev=4855.21 00:12:53.972 clat percentiles (usec): 00:12:53.972 | 1.00th=[ 8356], 5.00th=[10290], 10.00th=[10814], 20.00th=[11207], 00:12:53.972 | 30.00th=[11469], 40.00th=[11731], 50.00th=[11994], 60.00th=[12256], 00:12:53.972 | 70.00th=[13042], 80.00th=[17171], 90.00th=[19268], 95.00th=[21103], 00:12:53.972 | 99.00th=[34866], 99.50th=[35914], 99.90th=[37487], 99.95th=[37487], 00:12:53.972 | 99.99th=[37487] 00:12:53.972 bw ( KiB/s): min=16384, max=20480, per=31.75%, avg=18432.00, stdev=2896.31, samples=2 00:12:53.972 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:12:53.972 lat (usec) : 750=0.01% 00:12:53.972 lat (msec) : 4=0.36%, 10=5.15%, 20=84.82%, 50=9.66% 00:12:53.972 cpu : usr=5.29%, sys=7.29%, ctx=436, majf=0, minf=1 00:12:53.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:53.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.972 issued rwts: total=4306,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.972 job1: (groupid=0, jobs=1): err= 0: pid=2021667: Wed Nov 20 06:23:25 2024 00:12:53.972 read: IOPS=4447, BW=17.4MiB/s (18.2MB/s)(17.5MiB/1006msec) 00:12:53.972 slat (usec): min=3, max=9461, avg=103.03, stdev=617.28 00:12:53.972 clat (usec): min=2409, max=27677, avg=12871.44, stdev=3562.38 00:12:53.972 lat (usec): min=5159, max=27697, avg=12974.47, stdev=3608.51 00:12:53.972 clat percentiles (usec): 00:12:53.972 | 1.00th=[ 6783], 5.00th=[ 8586], 10.00th=[ 9765], 20.00th=[10421], 00:12:53.972 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11994], 60.00th=[12649], 00:12:53.972 | 70.00th=[14091], 80.00th=[15139], 90.00th=[18220], 95.00th=[20055], 00:12:53.972 | 99.00th=[23987], 99.50th=[25035], 99.90th=[26346], 99.95th=[26346], 00:12:53.972 | 99.99th=[27657] 00:12:53.972 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:12:53.972 slat (usec): min=4, max=4793, avg=105.84, stdev=349.45 00:12:53.972 clat (usec): 
min=1700, max=32949, avg=15129.20, stdev=6194.07 00:12:53.972 lat (usec): min=1718, max=32969, avg=15235.05, stdev=6235.79 00:12:53.972 clat percentiles (usec): 00:12:53.972 | 1.00th=[ 5342], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[10945], 00:12:53.972 | 30.00th=[11338], 40.00th=[11469], 50.00th=[11600], 60.00th=[13829], 00:12:53.972 | 70.00th=[18744], 80.00th=[21103], 90.00th=[23987], 95.00th=[27132], 00:12:53.972 | 99.00th=[32637], 99.50th=[32637], 99.90th=[32900], 99.95th=[32900], 00:12:53.972 | 99.99th=[32900] 00:12:53.972 bw ( KiB/s): min=16400, max=20464, per=31.75%, avg=18432.00, stdev=2873.68, samples=2 00:12:53.972 iops : min= 4100, max= 5116, avg=4608.00, stdev=718.42, samples=2 00:12:53.972 lat (msec) : 2=0.12%, 4=0.01%, 10=11.89%, 20=72.61%, 50=15.37% 00:12:53.972 cpu : usr=5.57%, sys=10.45%, ctx=649, majf=0, minf=1 00:12:53.972 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:53.972 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.972 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.972 issued rwts: total=4474,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.972 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.972 job2: (groupid=0, jobs=1): err= 0: pid=2021668: Wed Nov 20 06:23:25 2024 00:12:53.972 read: IOPS=3038, BW=11.9MiB/s (12.4MB/s)(12.0MiB/1011msec) 00:12:53.972 slat (usec): min=3, max=14352, avg=145.86, stdev=900.80 00:12:53.972 clat (usec): min=5872, max=47434, avg=16849.05, stdev=6661.82 00:12:53.972 lat (usec): min=5890, max=47443, avg=16994.91, stdev=6724.81 00:12:53.972 clat percentiles (usec): 00:12:53.973 | 1.00th=[ 8160], 5.00th=[10159], 10.00th=[11469], 20.00th=[12256], 00:12:53.973 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14484], 60.00th=[15270], 00:12:53.973 | 70.00th=[17957], 80.00th=[19530], 90.00th=[25822], 95.00th=[32113], 00:12:53.973 | 99.00th=[42206], 99.50th=[44827], 99.90th=[47449], 99.95th=[47449], 00:12:53.973 | 99.99th=[47449] 00:12:53.973 write: IOPS=3353, BW=13.1MiB/s (13.7MB/s)(13.2MiB/1011msec); 0 zone resets 00:12:53.973 slat (usec): min=4, max=14451, avg=153.04, stdev=698.82 00:12:53.973 clat (usec): min=1172, max=58701, avg=22583.39, stdev=9993.42 00:12:53.973 lat (usec): min=1192, max=58710, avg=22736.43, stdev=10057.74 00:12:53.973 clat percentiles (usec): 00:12:53.973 | 1.00th=[ 4686], 5.00th=[ 9896], 10.00th=[11469], 20.00th=[14484], 00:12:53.973 | 30.00th=[17695], 40.00th=[20317], 50.00th=[21365], 60.00th=[21627], 00:12:53.973 | 70.00th=[24773], 80.00th=[30802], 90.00th=[36963], 95.00th=[43779], 00:12:53.973 | 99.00th=[48497], 99.50th=[55313], 99.90th=[58459], 99.95th=[58459], 00:12:53.973 | 99.99th=[58459] 00:12:53.973 bw ( KiB/s): min=12424, max=13680, per=22.48%, avg=13052.00, stdev=888.13, samples=2 00:12:53.973 iops : min= 3106, max= 3420, avg=3263.00, stdev=222.03, samples=2 00:12:53.973 lat (msec) : 2=0.03%, 4=0.19%, 10=4.09%, 20=54.72%, 50=40.51% 00:12:53.973 lat (msec) : 100=0.46% 00:12:53.973 cpu : usr=4.36%, sys=6.34%, ctx=398, majf=0, minf=2 00:12:53.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:53.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.973 issued rwts: total=3072,3390,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.973 job3: (groupid=0, jobs=1): err= 0: pid=2021669: Wed 
Nov 20 06:23:25 2024 00:12:53.973 read: IOPS=2266, BW=9068KiB/s (9286kB/s)(9476KiB/1045msec) 00:12:53.973 slat (usec): min=3, max=23273, avg=163.35, stdev=1134.36 00:12:53.973 clat (usec): min=8795, max=62573, avg=21191.00, stdev=11709.07 00:12:53.973 lat (usec): min=8813, max=62585, avg=21354.35, stdev=11766.46 00:12:53.973 clat percentiles (usec): 00:12:53.973 | 1.00th=[10421], 5.00th=[13304], 10.00th=[13698], 20.00th=[14222], 00:12:53.973 | 30.00th=[14615], 40.00th=[15139], 50.00th=[16450], 60.00th=[17171], 00:12:53.973 | 70.00th=[20317], 80.00th=[24773], 90.00th=[40633], 95.00th=[52167], 00:12:53.973 | 99.00th=[62129], 99.50th=[62129], 99.90th=[62653], 99.95th=[62653], 00:12:53.973 | 99.99th=[62653] 00:12:53.973 write: IOPS=2449, BW=9799KiB/s (10.0MB/s)(10.0MiB/1045msec); 0 zone resets 00:12:53.973 slat (usec): min=4, max=29864, avg=228.47, stdev=1322.07 00:12:53.973 clat (usec): min=9235, max=77231, avg=31392.28, stdev=15334.91 00:12:53.973 lat (usec): min=11977, max=77271, avg=31620.74, stdev=15418.67 00:12:53.973 clat percentiles (usec): 00:12:53.973 | 1.00th=[13829], 5.00th=[16188], 10.00th=[19268], 20.00th=[20841], 00:12:53.973 | 30.00th=[21365], 40.00th=[21627], 50.00th=[23200], 60.00th=[26870], 00:12:53.973 | 70.00th=[35914], 80.00th=[46400], 90.00th=[56886], 95.00th=[63177], 00:12:53.973 | 99.00th=[73925], 99.50th=[74974], 99.90th=[74974], 99.95th=[74974], 00:12:53.973 | 99.99th=[77071] 00:12:53.973 bw ( KiB/s): min= 9784, max=10696, per=17.64%, avg=10240.00, stdev=644.88, samples=2 00:12:53.973 iops : min= 2446, max= 2674, avg=2560.00, stdev=161.22, samples=2 00:12:53.973 lat (msec) : 10=0.45%, 20=38.87%, 50=49.75%, 100=10.94% 00:12:53.973 cpu : usr=3.64%, sys=5.46%, ctx=312, majf=0, minf=1 00:12:53.973 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:12:53.973 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:53.973 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:53.973 issued rwts: total=2369,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:53.973 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:53.973 00:12:53.973 Run status group 0 (all jobs): 00:12:53.973 READ: bw=53.2MiB/s (55.7MB/s), 9068KiB/s-17.4MiB/s (9286kB/s-18.2MB/s), io=55.6MiB (58.2MB), run=1003-1045msec 00:12:53.973 WRITE: bw=56.7MiB/s (59.4MB/s), 9799KiB/s-17.9MiB/s (10.0MB/s-18.8MB/s), io=59.2MiB (62.1MB), run=1003-1045msec 00:12:53.973 00:12:53.973 Disk stats (read/write): 00:12:53.973 nvme0n1: ios=3634/3679, merge=0/0, ticks=18427/16291, in_queue=34718, util=86.27% 00:12:53.973 nvme0n2: ios=3611/3671, merge=0/0, ticks=24381/27895, in_queue=52276, util=98.98% 00:12:53.973 nvme0n3: ios=2611/2895, merge=0/0, ticks=41233/62259, in_queue=103492, util=90.24% 00:12:53.973 nvme0n4: ios=2071/2223, merge=0/0, ticks=20623/31917, in_queue=52540, util=98.41% 00:12:53.973 06:23:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:53.973 [global] 00:12:53.973 thread=1 00:12:53.973 invalidate=1 00:12:53.973 rw=randwrite 00:12:53.973 time_based=1 00:12:53.973 runtime=1 00:12:53.973 ioengine=libaio 00:12:53.973 direct=1 00:12:53.973 bs=4096 00:12:53.973 iodepth=128 00:12:53.973 norandommap=0 00:12:53.973 numjobs=1 00:12:53.973 00:12:53.973 verify_dump=1 00:12:53.973 verify_backlog=512 00:12:53.973 verify_state_save=0 00:12:53.973 do_verify=1 00:12:53.973 verify=crc32c-intel 00:12:53.973 
[job0] 00:12:53.973 filename=/dev/nvme0n1 00:12:53.973 [job1] 00:12:53.973 filename=/dev/nvme0n2 00:12:53.973 [job2] 00:12:53.973 filename=/dev/nvme0n3 00:12:53.973 [job3] 00:12:53.973 filename=/dev/nvme0n4 00:12:53.973 Could not set queue depth (nvme0n1) 00:12:53.973 Could not set queue depth (nvme0n2) 00:12:53.973 Could not set queue depth (nvme0n3) 00:12:53.973 Could not set queue depth (nvme0n4) 00:12:53.973 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.973 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.973 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.973 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:53.973 fio-3.35 00:12:53.973 Starting 4 threads 00:12:55.347 00:12:55.347 job0: (groupid=0, jobs=1): err= 0: pid=2021906: Wed Nov 20 06:23:27 2024 00:12:55.347 read: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec) 00:12:55.347 slat (usec): min=2, max=11316, avg=137.53, stdev=866.11 00:12:55.347 clat (usec): min=10006, max=40844, avg=15703.25, stdev=4843.44 00:12:55.347 lat (usec): min=10017, max=40863, avg=15840.78, stdev=4940.66 00:12:55.347 clat percentiles (usec): 00:12:55.347 | 1.00th=[10159], 5.00th=[10421], 10.00th=[10945], 20.00th=[12518], 00:12:55.347 | 30.00th=[13960], 40.00th=[14091], 50.00th=[14484], 60.00th=[14746], 00:12:55.347 | 70.00th=[15139], 80.00th=[18482], 90.00th=[21890], 95.00th=[23725], 00:12:55.347 | 99.00th=[35914], 99.50th=[39584], 99.90th=[40633], 99.95th=[40633], 00:12:55.347 | 99.99th=[40633] 00:12:55.347 write: IOPS=3411, BW=13.3MiB/s (14.0MB/s)(13.4MiB/1008msec); 0 zone resets 00:12:55.347 slat (usec): min=4, max=11129, avg=157.07, stdev=668.05 00:12:55.347 clat (usec): min=6309, max=53548, avg=23085.23, stdev=10019.42 00:12:55.347 lat (usec): min=6323, max=53580, avg=23242.30, stdev=10079.49 00:12:55.347 clat percentiles (usec): 00:12:55.347 | 1.00th=[ 9503], 5.00th=[10159], 10.00th=[10421], 20.00th=[16057], 00:12:55.347 | 30.00th=[17695], 40.00th=[20317], 50.00th=[21365], 60.00th=[21890], 00:12:55.347 | 70.00th=[22938], 80.00th=[31589], 90.00th=[38536], 95.00th=[42730], 00:12:55.347 | 99.00th=[51119], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:12:55.347 | 99.99th=[53740] 00:12:55.347 bw ( KiB/s): min=11464, max=15032, per=20.70%, avg=13248.00, stdev=2522.96, samples=2 00:12:55.347 iops : min= 2866, max= 3758, avg=3312.00, stdev=630.74, samples=2 00:12:55.347 lat (msec) : 10=2.10%, 20=58.56%, 50=38.01%, 100=1.32% 00:12:55.347 cpu : usr=4.37%, sys=7.75%, ctx=372, majf=0, minf=1 00:12:55.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:12:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.347 issued rwts: total=3072,3439,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.347 job1: (groupid=0, jobs=1): err= 0: pid=2021913: Wed Nov 20 06:23:27 2024 00:12:55.347 read: IOPS=2554, BW=9.98MiB/s (10.5MB/s)(10.1MiB/1008msec) 00:12:55.347 slat (usec): min=3, max=18068, avg=157.08, stdev=1011.93 00:12:55.347 clat (usec): min=7547, max=53341, avg=19642.13, stdev=7003.65 00:12:55.347 lat (usec): min=8764, max=53375, avg=19799.20, stdev=7099.27 00:12:55.347 
clat percentiles (usec): 00:12:55.347 | 1.00th=[10945], 5.00th=[13042], 10.00th=[13304], 20.00th=[14091], 00:12:55.347 | 30.00th=[14746], 40.00th=[15664], 50.00th=[17433], 60.00th=[18744], 00:12:55.347 | 70.00th=[21103], 80.00th=[23725], 90.00th=[32113], 95.00th=[36963], 00:12:55.347 | 99.00th=[36963], 99.50th=[36963], 99.90th=[47973], 99.95th=[48497], 00:12:55.347 | 99.99th=[53216] 00:12:55.347 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:12:55.347 slat (usec): min=3, max=11892, avg=182.93, stdev=763.06 00:12:55.347 clat (usec): min=740, max=51061, avg=24955.21, stdev=7631.10 00:12:55.347 lat (usec): min=9922, max=51070, avg=25138.14, stdev=7661.71 00:12:55.347 clat percentiles (usec): 00:12:55.347 | 1.00th=[10028], 5.00th=[13042], 10.00th=[19530], 20.00th=[20841], 00:12:55.347 | 30.00th=[21627], 40.00th=[21890], 50.00th=[22152], 60.00th=[23725], 00:12:55.347 | 70.00th=[26870], 80.00th=[31065], 90.00th=[34866], 95.00th=[41157], 00:12:55.347 | 99.00th=[50594], 99.50th=[51119], 99.90th=[51119], 99.95th=[51119], 00:12:55.347 | 99.99th=[51119] 00:12:55.347 bw ( KiB/s): min=11472, max=12208, per=18.50%, avg=11840.00, stdev=520.43, samples=2 00:12:55.347 iops : min= 2868, max= 3052, avg=2960.00, stdev=130.11, samples=2 00:12:55.347 lat (usec) : 750=0.02% 00:12:55.347 lat (msec) : 10=0.97%, 20=37.12%, 50=60.76%, 100=1.13% 00:12:55.347 cpu : usr=4.77%, sys=6.36%, ctx=392, majf=0, minf=1 00:12:55.347 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:12:55.347 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.347 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.347 issued rwts: total=2575,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.347 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.347 job2: (groupid=0, jobs=1): err= 0: pid=2021933: Wed Nov 20 06:23:27 2024 00:12:55.347 read: IOPS=4521, BW=17.7MiB/s (18.5MB/s)(18.5MiB/1047msec) 00:12:55.347 slat (usec): min=2, max=13054, avg=111.80, stdev=795.41 00:12:55.347 clat (usec): min=4179, max=65186, avg=14729.75, stdev=8113.82 00:12:55.347 lat (usec): min=4186, max=65194, avg=14841.54, stdev=8145.39 00:12:55.347 clat percentiles (usec): 00:12:55.347 | 1.00th=[ 5800], 5.00th=[10683], 10.00th=[11469], 20.00th=[11863], 00:12:55.347 | 30.00th=[11863], 40.00th=[11994], 50.00th=[12387], 60.00th=[12649], 00:12:55.347 | 70.00th=[13435], 80.00th=[15533], 90.00th=[19792], 95.00th=[22938], 00:12:55.348 | 99.00th=[60556], 99.50th=[63177], 99.90th=[65274], 99.95th=[65274], 00:12:55.348 | 99.99th=[65274] 00:12:55.348 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1047msec); 0 zone resets 00:12:55.348 slat (usec): min=4, max=15488, avg=83.44, stdev=554.46 00:12:55.348 clat (usec): min=612, max=65197, avg=12221.72, stdev=4477.34 00:12:55.348 lat (usec): min=632, max=65208, avg=12305.16, stdev=4530.49 00:12:55.348 clat percentiles (usec): 00:12:55.348 | 1.00th=[ 3261], 5.00th=[ 5080], 10.00th=[ 7701], 20.00th=[10290], 00:12:55.348 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12256], 60.00th=[12518], 00:12:55.348 | 70.00th=[12649], 80.00th=[12780], 90.00th=[13698], 95.00th=[20317], 00:12:55.348 | 99.00th=[30016], 99.50th=[30278], 99.90th=[32113], 99.95th=[35914], 00:12:55.348 | 99.99th=[65274] 00:12:55.348 bw ( KiB/s): min=20472, max=20472, per=31.99%, avg=20472.00, stdev= 0.00, samples=2 00:12:55.348 iops : min= 5118, max= 5118, avg=5118.00, stdev= 0.00, samples=2 00:12:55.348 lat (usec) : 750=0.01% 00:12:55.348 lat (msec) 
: 2=0.03%, 4=0.71%, 10=9.44%, 20=82.37%, 50=6.16% 00:12:55.348 lat (msec) : 100=1.28% 00:12:55.348 cpu : usr=3.82%, sys=7.17%, ctx=505, majf=0, minf=1 00:12:55.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.348 issued rwts: total=4734,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.348 job3: (groupid=0, jobs=1): err= 0: pid=2021945: Wed Nov 20 06:23:27 2024 00:12:55.348 read: IOPS=4995, BW=19.5MiB/s (20.5MB/s)(19.5MiB/1001msec) 00:12:55.348 slat (usec): min=2, max=6156, avg=98.91, stdev=602.22 00:12:55.348 clat (usec): min=619, max=48924, avg=12550.74, stdev=2893.01 00:12:55.348 lat (usec): min=6380, max=48928, avg=12649.65, stdev=2921.62 00:12:55.348 clat percentiles (usec): 00:12:55.348 | 1.00th=[ 6652], 5.00th=[ 8979], 10.00th=[10290], 20.00th=[11863], 00:12:55.348 | 30.00th=[12125], 40.00th=[12256], 50.00th=[12387], 60.00th=[12518], 00:12:55.348 | 70.00th=[12780], 80.00th=[13173], 90.00th=[14353], 95.00th=[15664], 00:12:55.348 | 99.00th=[17695], 99.50th=[17957], 99.90th=[49021], 99.95th=[49021], 00:12:55.348 | 99.99th=[49021] 00:12:55.348 write: IOPS=5114, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1001msec); 0 zone resets 00:12:55.348 slat (usec): min=3, max=6156, avg=88.76, stdev=435.49 00:12:55.348 clat (usec): min=6897, max=18791, avg=12484.03, stdev=1658.74 00:12:55.348 lat (usec): min=6915, max=18933, avg=12572.79, stdev=1663.31 00:12:55.348 clat percentiles (usec): 00:12:55.348 | 1.00th=[ 7242], 5.00th=[ 8848], 10.00th=[10945], 20.00th=[11994], 00:12:55.348 | 30.00th=[12256], 40.00th=[12518], 50.00th=[12649], 60.00th=[12911], 00:12:55.348 | 70.00th=[13042], 80.00th=[13173], 90.00th=[13435], 95.00th=[15139], 00:12:55.348 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18482], 00:12:55.348 | 99.99th=[18744] 00:12:55.348 bw ( KiB/s): min=20480, max=20480, per=32.00%, avg=20480.00, stdev= 0.00, samples=1 00:12:55.348 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=1 00:12:55.348 lat (usec) : 750=0.01% 00:12:55.348 lat (msec) : 10=8.24%, 20=91.55%, 50=0.20% 00:12:55.348 cpu : usr=6.90%, sys=9.40%, ctx=546, majf=0, minf=1 00:12:55.348 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:55.348 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:55.348 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:55.348 issued rwts: total=5000,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:55.348 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:55.348 00:12:55.348 Run status group 0 (all jobs): 00:12:55.348 READ: bw=57.4MiB/s (60.2MB/s), 9.98MiB/s-19.5MiB/s (10.5MB/s-20.5MB/s), io=60.1MiB (63.0MB), run=1001-1047msec 00:12:55.348 WRITE: bw=62.5MiB/s (65.5MB/s), 11.9MiB/s-20.0MiB/s (12.5MB/s-20.9MB/s), io=65.4MiB (68.6MB), run=1001-1047msec 00:12:55.348 00:12:55.348 Disk stats (read/write): 00:12:55.348 nvme0n1: ios=2610/2935, merge=0/0, ticks=20239/30635, in_queue=50874, util=86.67% 00:12:55.348 nvme0n2: ios=2195/2560, merge=0/0, ticks=22158/31401, in_queue=53559, util=93.50% 00:12:55.348 nvme0n3: ios=4155/4167, merge=0/0, ticks=49638/42111, in_queue=91749, util=97.91% 00:12:55.348 nvme0n4: ios=4146/4503, merge=0/0, ticks=25934/26169, in_queue=52103, util=98.95% 00:12:55.348 06:23:27 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:55.348 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2022135 00:12:55.348 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:55.348 06:23:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:55.348 [global] 00:12:55.348 thread=1 00:12:55.348 invalidate=1 00:12:55.348 rw=read 00:12:55.348 time_based=1 00:12:55.348 runtime=10 00:12:55.348 ioengine=libaio 00:12:55.348 direct=1 00:12:55.348 bs=4096 00:12:55.348 iodepth=1 00:12:55.348 norandommap=1 00:12:55.348 numjobs=1 00:12:55.348 00:12:55.348 [job0] 00:12:55.348 filename=/dev/nvme0n1 00:12:55.348 [job1] 00:12:55.348 filename=/dev/nvme0n2 00:12:55.348 [job2] 00:12:55.348 filename=/dev/nvme0n3 00:12:55.348 [job3] 00:12:55.348 filename=/dev/nvme0n4 00:12:55.348 Could not set queue depth (nvme0n1) 00:12:55.348 Could not set queue depth (nvme0n2) 00:12:55.348 Could not set queue depth (nvme0n3) 00:12:55.348 Could not set queue depth (nvme0n4) 00:12:55.605 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:55.606 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:55.606 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:55.606 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:55.606 fio-3.35 00:12:55.606 Starting 4 threads 00:12:58.885 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:58.885 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:58.885 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=638976, buflen=4096 00:12:58.885 fio: pid=2022254, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:58.885 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:58.885 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:58.885 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=319488, buflen=4096 00:12:58.885 fio: pid=2022253, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:59.183 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:59.183 06:23:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:59.183 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=4894720, buflen=4096 00:12:59.183 fio: pid=2022251, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:59.442 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:59.442 
06:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:59.442 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=3010560, buflen=4096 00:12:59.442 fio: pid=2022252, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:59.442 00:12:59.442 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2022251: Wed Nov 20 06:23:31 2024 00:12:59.442 read: IOPS=340, BW=1361KiB/s (1394kB/s)(4780KiB/3512msec) 00:12:59.442 slat (usec): min=4, max=15690, avg=41.01, stdev=684.18 00:12:59.442 clat (usec): min=183, max=42172, avg=2875.93, stdev=10055.56 00:12:59.442 lat (usec): min=190, max=51961, avg=2916.96, stdev=10108.70 00:12:59.442 clat percentiles (usec): 00:12:59.442 | 1.00th=[ 202], 5.00th=[ 225], 10.00th=[ 237], 20.00th=[ 243], 00:12:59.442 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 255], 00:12:59.442 | 70.00th=[ 260], 80.00th=[ 269], 90.00th=[ 293], 95.00th=[41157], 00:12:59.442 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:59.442 | 99.99th=[42206] 00:12:59.442 bw ( KiB/s): min= 96, max= 7552, per=59.32%, avg=1353.33, stdev=3036.76, samples=6 00:12:59.442 iops : min= 24, max= 1888, avg=338.33, stdev=759.19, samples=6 00:12:59.442 lat (usec) : 250=40.05%, 500=53.01%, 750=0.50% 00:12:59.442 lat (msec) : 50=6.35% 00:12:59.442 cpu : usr=0.11%, sys=0.43%, ctx=1202, majf=0, minf=2 00:12:59.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 issued rwts: total=1196,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.442 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2022252: Wed Nov 20 06:23:31 2024 00:12:59.442 read: IOPS=193, BW=775KiB/s (793kB/s)(2940KiB/3795msec) 00:12:59.442 slat (usec): min=5, max=15724, avg=63.98, stdev=792.20 00:12:59.442 clat (usec): min=167, max=49843, avg=5075.36, stdev=13256.84 00:12:59.442 lat (usec): min=173, max=49989, avg=5139.39, stdev=13327.61 00:12:59.442 clat percentiles (usec): 00:12:59.442 | 1.00th=[ 186], 5.00th=[ 204], 10.00th=[ 217], 20.00th=[ 231], 00:12:59.442 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:12:59.442 | 70.00th=[ 273], 80.00th=[ 285], 90.00th=[41157], 95.00th=[41157], 00:12:59.442 | 99.00th=[42206], 99.50th=[42206], 99.90th=[50070], 99.95th=[50070], 00:12:59.442 | 99.99th=[50070] 00:12:59.442 bw ( KiB/s): min= 96, max= 2416, per=34.50%, avg=787.14, stdev=941.79, samples=7 00:12:59.442 iops : min= 24, max= 604, avg=196.57, stdev=235.43, samples=7 00:12:59.442 lat (usec) : 250=26.09%, 500=62.09% 00:12:59.442 lat (msec) : 50=11.68% 00:12:59.442 cpu : usr=0.03%, sys=0.32%, ctx=740, majf=0, minf=1 00:12:59.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 issued rwts: total=736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.442 job2: (groupid=0, jobs=1): err=95 
(file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2022253: Wed Nov 20 06:23:31 2024 00:12:59.442 read: IOPS=24, BW=96.8KiB/s (99.2kB/s)(312KiB/3222msec) 00:12:59.442 slat (nsec): min=13250, max=46238, avg=23777.32, stdev=9514.78 00:12:59.442 clat (usec): min=546, max=42152, avg=40972.89, stdev=4663.86 00:12:59.442 lat (usec): min=588, max=42167, avg=40996.73, stdev=4661.93 00:12:59.442 clat percentiles (usec): 00:12:59.442 | 1.00th=[ 545], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:59.442 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[41681], 00:12:59.442 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:59.442 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:59.442 | 99.99th=[42206] 00:12:59.442 bw ( KiB/s): min= 96, max= 104, per=4.25%, avg=97.33, stdev= 3.27, samples=6 00:12:59.442 iops : min= 24, max= 26, avg=24.33, stdev= 0.82, samples=6 00:12:59.442 lat (usec) : 750=1.27% 00:12:59.442 lat (msec) : 50=97.47% 00:12:59.442 cpu : usr=0.00%, sys=0.12%, ctx=80, majf=0, minf=2 00:12:59.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 issued rwts: total=79,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.442 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2022254: Wed Nov 20 06:23:31 2024 00:12:59.442 read: IOPS=53, BW=212KiB/s (217kB/s)(624KiB/2948msec) 00:12:59.442 slat (nsec): min=7205, max=49221, avg=22349.24, stdev=10651.40 00:12:59.442 clat (usec): min=202, max=42148, avg=18716.05, stdev=20488.83 00:12:59.442 lat (usec): min=211, max=42181, avg=18738.44, stdev=20491.27 00:12:59.442 clat percentiles (usec): 00:12:59.442 | 1.00th=[ 208], 5.00th=[ 262], 10.00th=[ 269], 20.00th=[ 281], 00:12:59.442 | 30.00th=[ 289], 40.00th=[ 302], 50.00th=[ 355], 60.00th=[41157], 00:12:59.442 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:12:59.442 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:59.442 | 99.99th=[42206] 00:12:59.442 bw ( KiB/s): min= 104, max= 648, per=10.00%, avg=228.80, stdev=235.26, samples=5 00:12:59.442 iops : min= 26, max= 162, avg=57.20, stdev=58.81, samples=5 00:12:59.442 lat (usec) : 250=3.82%, 500=50.96% 00:12:59.442 lat (msec) : 50=44.59% 00:12:59.442 cpu : usr=0.10%, sys=0.10%, ctx=159, majf=0, minf=1 00:12:59.442 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:59.442 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:59.442 issued rwts: total=157,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:59.442 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:59.442 00:12:59.442 Run status group 0 (all jobs): 00:12:59.442 READ: bw=2281KiB/s (2336kB/s), 96.8KiB/s-1361KiB/s (99.2kB/s-1394kB/s), io=8656KiB (8864kB), run=2948-3795msec 00:12:59.442 00:12:59.442 Disk stats (read/write): 00:12:59.442 nvme0n1: ios=1190/0, merge=0/0, ticks=3265/0, in_queue=3265, util=94.96% 00:12:59.442 nvme0n2: ios=730/0, merge=0/0, ticks=3522/0, in_queue=3522, util=95.77% 00:12:59.442 nvme0n3: ios=75/0, merge=0/0, ticks=3074/0, in_queue=3074, util=96.82% 00:12:59.442 
nvme0n4: ios=200/0, merge=0/0, ticks=3861/0, in_queue=3861, util=99.66% 00:12:59.700 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:59.700 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:59.957 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:59.957 06:23:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:13:00.521 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:00.521 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:13:00.521 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:13:00.521 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2022135 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:01.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:13:01.087 nvmf hotplug test: fio failed as expected 00:13:01.087 06:23:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm 
-f ./local-job1-1-verify.state 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:01.345 rmmod nvme_tcp 00:13:01.345 rmmod nvme_fabrics 00:13:01.345 rmmod nvme_keyring 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2020120 ']' 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2020120 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2020120 ']' 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2020120 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2020120 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2020120' 00:13:01.345 killing process with pid 2020120 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2020120 00:13:01.345 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2020120 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:01.604 06:23:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:04.147 00:13:04.147 real 0m24.181s 00:13:04.147 user 1m25.626s 00:13:04.147 sys 0m6.190s 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:13:04.147 ************************************ 00:13:04.147 END TEST nvmf_fio_target 00:13:04.147 ************************************ 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:04.147 ************************************ 00:13:04.147 START TEST nvmf_bdevio 00:13:04.147 ************************************ 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:13:04.147 * Looking for test storage... 
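The hotplug sequence traced above follows a simple negative-test pattern: fio is started in the background against the NVMe-oF namespaces (pid 2022135), the backing raid/malloc bdevs are deleted over RPC while I/O is still in flight, and the background fio is then expected to exit non-zero — the io_u "Operation not supported" (err=95) failures are the intended outcome. A minimal sketch of that pattern, assuming the fio-wrapper and rpc.py helpers visible in the trace and a placeholder bdev name; this is an illustration, not the actual target/fio.sh logic:

  # sketch only -- not the actual target/fio.sh implementation
  scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 &   # background I/O against the nvme0n* namespaces
  fio_pid=$!
  sleep 3                                                    # let the workload ramp up
  scripts/rpc.py bdev_malloc_delete Malloc0                  # hot-remove a backing bdev mid-run (placeholder name)
  fio_status=0
  wait "$fio_pid" || fio_status=$?
  if [ "$fio_status" -ne 0 ]; then
      echo 'nvmf hotplug test: fio failed as expected'       # a non-zero fio exit is the pass condition here
  fi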
00:13:04.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:04.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.147 --rc genhtml_branch_coverage=1 00:13:04.147 --rc genhtml_function_coverage=1 00:13:04.147 --rc genhtml_legend=1 00:13:04.147 --rc geninfo_all_blocks=1 00:13:04.147 --rc geninfo_unexecuted_blocks=1 00:13:04.147 00:13:04.147 ' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:04.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.147 --rc genhtml_branch_coverage=1 00:13:04.147 --rc genhtml_function_coverage=1 00:13:04.147 --rc genhtml_legend=1 00:13:04.147 --rc geninfo_all_blocks=1 00:13:04.147 --rc geninfo_unexecuted_blocks=1 00:13:04.147 00:13:04.147 ' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:04.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.147 --rc genhtml_branch_coverage=1 00:13:04.147 --rc genhtml_function_coverage=1 00:13:04.147 --rc genhtml_legend=1 00:13:04.147 --rc geninfo_all_blocks=1 00:13:04.147 --rc geninfo_unexecuted_blocks=1 00:13:04.147 00:13:04.147 ' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:04.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:04.147 --rc genhtml_branch_coverage=1 00:13:04.147 --rc genhtml_function_coverage=1 00:13:04.147 --rc genhtml_legend=1 00:13:04.147 --rc geninfo_all_blocks=1 00:13:04.147 --rc geninfo_unexecuted_blocks=1 00:13:04.147 00:13:04.147 ' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:04.147 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:04.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 
-- # nvmftestinit 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:13:04.148 06:23:35 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:06.052 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:06.053 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:06.053 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:06.053 06:23:37 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:06.053 Found net devices under 0000:09:00.0: cvl_0_0 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:06.053 Found net devices under 0000:09:00.1: cvl_0_1 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:06.053 
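Note: the gather_supported_nvmf_pci_devs trace above walks a table of Intel and Mellanox device IDs (the e810 0x1592/0x159b entries, x722 0x37d2, and the mlx list), keeps the functions matching the requested e810 NICs, and resolves each PCI address to its kernel netdev through sysfs; that is where cvl_0_0 and cvl_0_1 come from. A stripped-down version of just the sysfs lookup, with the two addresses from this run hard-coded (a sketch only, not the common.sh code):

  # map each selected PCI function to its net device, as the trace above does via a glob
  for pci in 0000:09:00.0 0000:09:00.1; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          echo "Found net devices under $pci: ${path##*/}"
      done
  done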
06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:06.053 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:06.312 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:06.312 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:13:06.312 00:13:06.312 --- 10.0.0.2 ping statistics --- 00:13:06.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.312 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.312 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:06.312 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:13:06.312 00:13:06.312 --- 10.0.0.1 ping statistics --- 00:13:06.312 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.312 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2024897 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2024897 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2024897 ']' 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.312 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:06.312 06:23:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.312 [2024-11-20 06:23:38.035895] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
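Note: the nvmftestinit trace above builds the physical test topology: the target-side E810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace with 10.0.0.2/24, the initiator port cvl_0_1 keeps 10.0.0.1/24 in the root namespace, TCP port 4420 is opened with a tagged iptables rule, and reachability is ping-checked in both directions before nvmf_tgt is started inside the namespace. A condensed recreation of that bring-up, reusing the names and addresses from the log (a sketch, not the harness code):

  # move the target port into its own namespace; the initiator port stays in the root ns
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic in, then verify connectivity both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1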
00:13:06.312 [2024-11-20 06:23:38.035984] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.312 [2024-11-20 06:23:38.113493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:06.571 [2024-11-20 06:23:38.176013] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.571 [2024-11-20 06:23:38.176062] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.571 [2024-11-20 06:23:38.176092] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.571 [2024-11-20 06:23:38.176104] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.571 [2024-11-20 06:23:38.176115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.571 [2024-11-20 06:23:38.177827] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:13:06.571 [2024-11-20 06:23:38.177871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:13:06.571 [2024-11-20 06:23:38.177954] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:13:06.571 [2024-11-20 06:23:38.177958] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 [2024-11-20 06:23:38.334924] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 Malloc0 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.571 06:23:38 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:06.571 [2024-11-20 06:23:38.400172] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.571 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:13:06.830 { 00:13:06.830 "params": { 00:13:06.830 "name": "Nvme$subsystem", 00:13:06.830 "trtype": "$TEST_TRANSPORT", 00:13:06.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:13:06.830 "adrfam": "ipv4", 00:13:06.830 "trsvcid": "$NVMF_PORT", 00:13:06.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:13:06.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:13:06.830 "hdgst": ${hdgst:-false}, 00:13:06.830 "ddgst": ${ddgst:-false} 00:13:06.830 }, 00:13:06.830 "method": "bdev_nvme_attach_controller" 00:13:06.830 } 00:13:06.830 EOF 00:13:06.830 )") 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:13:06.830 06:23:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:13:06.830 "params": { 00:13:06.830 "name": "Nvme1", 00:13:06.830 "trtype": "tcp", 00:13:06.830 "traddr": "10.0.0.2", 00:13:06.830 "adrfam": "ipv4", 00:13:06.830 "trsvcid": "4420", 00:13:06.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:13:06.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:13:06.830 "hdgst": false, 00:13:06.830 "ddgst": false 00:13:06.830 }, 00:13:06.830 "method": "bdev_nvme_attach_controller" 00:13:06.830 }' 00:13:06.830 [2024-11-20 06:23:38.452931] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
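Note: the bdevio target above is assembled with four rpc_cmd calls (TCP transport, a 64 MiB / 512 B Malloc0 bdev, subsystem nqn.2016-06.io.spdk:cnode1 with that namespace, and a 10.0.0.2:4420 listener), and the initiator side is handed the rendered bdev_nvme_attach_controller JSON on --json /dev/fd/62. rpc_cmd ultimately drives the target's JSON-RPC socket, so the same configuration could be built by hand; a sketch, assuming the in-tree scripts/rpc.py and the default /var/tmp/spdk.sock socket that waitforlisten polls above, with the arguments copied verbatim from the trace:

  rpc() { ./scripts/rpc.py -s /var/tmp/spdk.sock "$@"; }
  rpc nvmf_create_transport -t tcp -o -u 8192
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420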
00:13:06.830 [2024-11-20 06:23:38.453000] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2024927 ] 00:13:06.830 [2024-11-20 06:23:38.526084] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.830 [2024-11-20 06:23:38.591371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.830 [2024-11-20 06:23:38.591422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.830 [2024-11-20 06:23:38.591426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:07.088 I/O targets: 00:13:07.088 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:13:07.088 00:13:07.088 00:13:07.088 CUnit - A unit testing framework for C - Version 2.1-3 00:13:07.088 http://cunit.sourceforge.net/ 00:13:07.088 00:13:07.088 00:13:07.088 Suite: bdevio tests on: Nvme1n1 00:13:07.088 Test: blockdev write read block ...passed 00:13:07.088 Test: blockdev write zeroes read block ...passed 00:13:07.088 Test: blockdev write zeroes read no split ...passed 00:13:07.088 Test: blockdev write zeroes read split ...passed 00:13:07.346 Test: blockdev write zeroes read split partial ...passed 00:13:07.346 Test: blockdev reset ...[2024-11-20 06:23:38.935250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:13:07.346 [2024-11-20 06:23:38.935360] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xefc640 (9): Bad file descriptor 00:13:07.346 [2024-11-20 06:23:39.072135] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:13:07.346 passed 00:13:07.346 Test: blockdev write read 8 blocks ...passed 00:13:07.346 Test: blockdev write read size > 128k ...passed 00:13:07.346 Test: blockdev write read invalid size ...passed 00:13:07.346 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:13:07.346 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:13:07.346 Test: blockdev write read max offset ...passed 00:13:07.603 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:13:07.603 Test: blockdev writev readv 8 blocks ...passed 00:13:07.603 Test: blockdev writev readv 30 x 1block ...passed 00:13:07.603 Test: blockdev writev readv block ...passed 00:13:07.603 Test: blockdev writev readv size > 128k ...passed 00:13:07.603 Test: blockdev writev readv size > 128k in two iovs ...passed 00:13:07.603 Test: blockdev comparev and writev ...[2024-11-20 06:23:39.282782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.282819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.282844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.282861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.283208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.283233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.283255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.283272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.283635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.283659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.283681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.283698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.284039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.284063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.284084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:13:07.603 [2024-11-20 06:23:39.284101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:13:07.603 passed 00:13:07.603 Test: blockdev nvme passthru rw ...passed 00:13:07.603 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:23:39.367572] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.603 [2024-11-20 06:23:39.367600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.367746] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.603 [2024-11-20 06:23:39.367769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.367917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.603 [2024-11-20 06:23:39.367940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:13:07.603 [2024-11-20 06:23:39.368084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:13:07.603 [2024-11-20 06:23:39.368113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:13:07.603 passed 00:13:07.603 Test: blockdev nvme admin passthru ...passed 00:13:07.603 Test: blockdev copy ...passed 00:13:07.603 00:13:07.603 Run Summary: Type Total Ran Passed Failed Inactive 00:13:07.603 suites 1 1 n/a 0 0 00:13:07.603 tests 23 23 23 0 0 00:13:07.603 asserts 152 152 152 0 n/a 00:13:07.603 00:13:07.603 Elapsed time = 1.290 seconds 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:07.863 rmmod nvme_tcp 00:13:07.863 rmmod nvme_fabrics 00:13:07.863 rmmod nvme_keyring 00:13:07.863 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 
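Note: the bdevio run above finished clean, 23/23 tests and 152/152 asserts in 1.29 s against Nvme1n1, whose reported geometry (131072 blocks of 512 bytes) is exactly the exported Malloc0 bdev: 64 * 1024 * 1024 / 512 = 131072. The COMPARE FAILURE / ABORTED - FAILED FUSED and INVALID OPCODE notices come from tests that deliberately drive those error paths (each still reports passed), and the "Bad file descriptor" flush message belongs to the intentional controller disconnect in the reset test. The teardown that begins here mirrors the setup in reverse; a condensed sketch using the names and pid from this run (the namespace removal itself runs with xtrace muted, so that line is an assumption):

  ./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  sync
  modprobe -v -r nvme-tcp          # the rmmod lines above show nvme_tcp/nvme_fabrics/nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill 2024897 && wait 2024897     # nvmf_tgt pid from this run; wait works because the harness launched it
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the tagged 4420 ACCEPT rule
  ip netns delete cvl_0_0_ns_spdk  # assumed effect of the muted _remove_spdk_ns call
  ip -4 addr flush cvl_0_1         # the final flush appears just below in the trace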
00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2024897 ']' 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2024897 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2024897 ']' 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2024897 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2024897 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2024897' 00:13:08.121 killing process with pid 2024897 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2024897 00:13:08.121 06:23:39 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2024897 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:08.382 06:23:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.285 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:10.285 00:13:10.285 real 0m6.567s 00:13:10.285 user 0m10.386s 00:13:10.285 sys 0m2.233s 00:13:10.285 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.285 06:23:42 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:13:10.285 ************************************ 00:13:10.285 END TEST nvmf_bdevio 00:13:10.285 ************************************ 00:13:10.285 06:23:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:13:10.285 00:13:10.285 real 3m56.320s 00:13:10.285 user 10m15.784s 00:13:10.285 sys 1m7.799s 
00:13:10.285 06:23:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:10.285 06:23:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:13:10.285 ************************************ 00:13:10.285 END TEST nvmf_target_core 00:13:10.285 ************************************ 00:13:10.285 06:23:42 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:10.285 06:23:42 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:10.285 06:23:42 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.285 06:23:42 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:10.544 ************************************ 00:13:10.544 START TEST nvmf_target_extra 00:13:10.544 ************************************ 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:13:10.544 * Looking for test storage... 00:13:10.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lcov --version 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.544 --rc genhtml_branch_coverage=1 00:13:10.544 --rc genhtml_function_coverage=1 00:13:10.544 --rc genhtml_legend=1 00:13:10.544 --rc geninfo_all_blocks=1 00:13:10.544 --rc geninfo_unexecuted_blocks=1 00:13:10.544 00:13:10.544 ' 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.544 --rc genhtml_branch_coverage=1 00:13:10.544 --rc genhtml_function_coverage=1 00:13:10.544 --rc genhtml_legend=1 00:13:10.544 --rc geninfo_all_blocks=1 00:13:10.544 --rc geninfo_unexecuted_blocks=1 00:13:10.544 00:13:10.544 ' 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.544 --rc genhtml_branch_coverage=1 00:13:10.544 --rc genhtml_function_coverage=1 00:13:10.544 --rc genhtml_legend=1 00:13:10.544 --rc geninfo_all_blocks=1 00:13:10.544 --rc geninfo_unexecuted_blocks=1 00:13:10.544 00:13:10.544 ' 00:13:10.544 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:10.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.544 --rc genhtml_branch_coverage=1 00:13:10.544 --rc genhtml_function_coverage=1 00:13:10.544 --rc genhtml_legend=1 00:13:10.544 --rc geninfo_all_blocks=1 00:13:10.544 --rc geninfo_unexecuted_blocks=1 00:13:10.544 00:13:10.545 ' 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
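Note: everything from the END TEST nvmf_target_core banner onward is the prologue of the next stanza (run_test nvmf_target_extra, which in turn launches nvmf_example below). Each stanza re-probes lcov and re-sources nvmf/common.sh, which is why the LCOV_OPTS block, the paths/export.sh PATH string (growing by one more copy of the Go/golangci/protoc prefixes each time export.sh is re-sourced), and the harmless "line 33: [: : integer expression expected" warning from testing an unset value with -eq all repeat. The lt 1.15 2 / cmp_versions trace is scripts/common.sh splitting both version strings on ".-:" and comparing them field by field; here 1.15 < 2, so the lcov_branch_coverage/lcov_function_coverage flag spelling is kept. A minimal standalone equivalent of that comparison (a sketch, not the in-tree helper):

  # field-wise "is version A < version B", in the spirit of scripts/common.sh cmp_versions
  version_lt() {
      local IFS='.-:'
      local -a a=($1) b=($2)
      local i
      for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1   # equal is not less-than
  }
  version_lt "$(lcov --version | awk '{print $NF}')" 2 && echo "lcov 1.x flags" || echo "lcov 2.x flags"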
00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:10.545 ************************************ 00:13:10.545 START TEST nvmf_example 00:13:10.545 ************************************ 00:13:10.545 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:13:10.804 * Looking for test storage... 
00:13:10.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lcov --version 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.804 --rc genhtml_branch_coverage=1 00:13:10.804 --rc genhtml_function_coverage=1 00:13:10.804 --rc genhtml_legend=1 00:13:10.804 --rc geninfo_all_blocks=1 00:13:10.804 --rc geninfo_unexecuted_blocks=1 00:13:10.804 00:13:10.804 ' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.804 --rc genhtml_branch_coverage=1 00:13:10.804 --rc genhtml_function_coverage=1 00:13:10.804 --rc genhtml_legend=1 00:13:10.804 --rc geninfo_all_blocks=1 00:13:10.804 --rc geninfo_unexecuted_blocks=1 00:13:10.804 00:13:10.804 ' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.804 --rc genhtml_branch_coverage=1 00:13:10.804 --rc genhtml_function_coverage=1 00:13:10.804 --rc genhtml_legend=1 00:13:10.804 --rc geninfo_all_blocks=1 00:13:10.804 --rc geninfo_unexecuted_blocks=1 00:13:10.804 00:13:10.804 ' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:10.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:10.804 --rc genhtml_branch_coverage=1 00:13:10.804 --rc genhtml_function_coverage=1 00:13:10.804 --rc genhtml_legend=1 00:13:10.804 --rc geninfo_all_blocks=1 00:13:10.804 --rc geninfo_unexecuted_blocks=1 00:13:10.804 00:13:10.804 ' 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:13:10.804 06:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:10.804 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:10.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:13:10.805 06:23:42 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:13:10.805 06:23:42 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:13:13.341 06:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # mlx=() 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:13.341 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:13.342 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:13.342 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:13.342 Found net devices under 0000:09:00.0: cvl_0_0 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:13.342 Found net devices under 0000:09:00.1: cvl_0_1 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.342 06:23:44 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:13.342 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:13.342 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.300 ms 00:13:13.342 00:13:13.342 --- 10.0.0.2 ping statistics --- 00:13:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.342 rtt min/avg/max/mdev = 0.300/0.300/0.300/0.000 ms 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.342 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:13.342 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:13:13.342 00:13:13.342 --- 10.0.0.1 ping statistics --- 00:13:13.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.342 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2027179 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2027179 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # '[' -z 2027179 ']' 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:13.342 06:23:44 
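At this point nvmf_tcp_init has finished building the test network: one E810 port (cvl_0_0) was moved into the cvl_0_0_ns_spdk namespace and given 10.0.0.2/24 for the SPDK target, the peer port (cvl_0_1) stayed in the root namespace with 10.0.0.1/24 for the initiator, TCP port 4420 was opened in iptables, and connectivity was verified with ping in both directions. The sketch below is a hand-written, simplified reproduction of that topology, not the test suite's own code: the TGT_DEV/INI_DEV variables, the standalone-script form, and the omission of the SPDK_NVMF iptables comment tag are assumptions; the device names, namespace name, addresses and port come from the trace above, and the sketch assumes the two ports are cabled back-to-back as on this test bed and that it runs as root.

#!/usr/bin/env bash
# Minimal sketch (assumption, not the test's own helper) of the namespace
# topology that nvmf_tcp_init sets up in the log above.
set -euo pipefail

TGT_DEV=${TGT_DEV:-cvl_0_0}   # port handed to the SPDK target (placeholder variable)
INI_DEV=${INI_DEV:-cvl_0_1}   # port left in the root namespace for the initiator
NS=cvl_0_0_ns_spdk            # namespace name used by the test

# start from clean addresses on both ports
ip -4 addr flush "$TGT_DEV"
ip -4 addr flush "$INI_DEV"

# create the namespace and move the target-side NIC into it
ip netns add "$NS"
ip link set "$TGT_DEV" netns "$NS"

# initiator side stays in the root namespace, target side lives in the namespace
ip addr add 10.0.0.1/24 dev "$INI_DEV"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_DEV"

# bring everything up, including loopback inside the namespace
ip link set "$INI_DEV" up
ip netns exec "$NS" ip link set "$TGT_DEV" up
ip netns exec "$NS" ip link set lo up

# allow NVMe/TCP traffic to the default port 4420 (simplified: no comment tag)
iptables -I INPUT 1 -i "$INI_DEV" -p tcp --dport 4420 -j ACCEPT

# connectivity check in both directions, mirroring the pings in the log
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1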
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:13.342 06:23:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@866 -- # return 0 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:13:14.276 06:23:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:26.480 Initializing NVMe Controllers 00:13:26.480 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:26.480 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:13:26.480 Initialization complete. Launching workers. 00:13:26.480 ======================================================== 00:13:26.480 Latency(us) 00:13:26.480 Device Information : IOPS MiB/s Average min max 00:13:26.480 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14562.73 56.89 4394.76 783.20 16327.95 00:13:26.480 ======================================================== 00:13:26.480 Total : 14562.73 56.89 4394.76 783.20 16327.95 00:13:26.480 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:13:26.480 rmmod nvme_tcp 00:13:26.480 rmmod nvme_fabrics 00:13:26.480 rmmod nvme_keyring 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 2027179 ']' 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 2027179 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # '[' -z 2027179 ']' 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # kill -0 2027179 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # uname 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2027179 00:13:26.480 06:23:56 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # process_name=nvmf 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@962 -- # '[' nvmf = sudo ']' 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2027179' 00:13:26.480 killing process with pid 2027179 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@971 -- # kill 2027179 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@976 -- # wait 2027179 00:13:26.480 nvmf threads initialize successfully 00:13:26.480 bdev subsystem init successfully 00:13:26.480 created a nvmf target service 00:13:26.480 create targets's poll groups done 00:13:26.480 all subsystems of target started 00:13:26.480 nvmf target is running 00:13:26.480 all subsystems of target stopped 00:13:26.480 destroy targets's poll groups done 00:13:26.480 destroyed the nvmf target service 00:13:26.480 bdev subsystem finish successfully 00:13:26.480 nvmf threads destroy successfully 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:13:26.480 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:13:26.481 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:13:26.481 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:13:26.481 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.481 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.481 06:23:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.051 00:13:27.051 real 0m16.290s 00:13:27.051 user 0m45.892s 00:13:27.051 sys 0m3.423s 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:13:27.051 ************************************ 00:13:27.051 END TEST nvmf_example 00:13:27.051 ************************************ 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:27.051 ************************************ 00:13:27.051 START TEST nvmf_filesystem 00:13:27.051 ************************************ 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:13:27.051 * Looking for test storage... 00:13:27.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.051 --rc genhtml_branch_coverage=1 00:13:27.051 --rc genhtml_function_coverage=1 00:13:27.051 --rc genhtml_legend=1 00:13:27.051 --rc geninfo_all_blocks=1 00:13:27.051 --rc geninfo_unexecuted_blocks=1 00:13:27.051 00:13:27.051 ' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.051 --rc genhtml_branch_coverage=1 00:13:27.051 --rc genhtml_function_coverage=1 00:13:27.051 --rc genhtml_legend=1 00:13:27.051 --rc geninfo_all_blocks=1 00:13:27.051 --rc geninfo_unexecuted_blocks=1 00:13:27.051 00:13:27.051 ' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.051 --rc genhtml_branch_coverage=1 00:13:27.051 --rc genhtml_function_coverage=1 00:13:27.051 --rc genhtml_legend=1 00:13:27.051 --rc geninfo_all_blocks=1 00:13:27.051 --rc geninfo_unexecuted_blocks=1 00:13:27.051 00:13:27.051 ' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:27.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.051 --rc genhtml_branch_coverage=1 00:13:27.051 --rc genhtml_function_coverage=1 00:13:27.051 --rc genhtml_legend=1 00:13:27.051 --rc geninfo_all_blocks=1 00:13:27.051 --rc geninfo_unexecuted_blocks=1 00:13:27.051 00:13:27.051 ' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:13:27.051 06:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:27.051 
06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:13:27.051 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 
-- # CONFIG_COVERAGE=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:13:27.052 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:13:27.052 #define SPDK_CONFIG_H 00:13:27.052 #define SPDK_CONFIG_AIO_FSDEV 1 00:13:27.052 #define SPDK_CONFIG_APPS 1 00:13:27.052 #define SPDK_CONFIG_ARCH native 00:13:27.052 #undef SPDK_CONFIG_ASAN 00:13:27.052 #undef SPDK_CONFIG_AVAHI 00:13:27.052 #undef SPDK_CONFIG_CET 00:13:27.052 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:13:27.052 #define SPDK_CONFIG_COVERAGE 1 00:13:27.052 #define SPDK_CONFIG_CROSS_PREFIX 00:13:27.052 #undef SPDK_CONFIG_CRYPTO 00:13:27.052 #undef SPDK_CONFIG_CRYPTO_MLX5 00:13:27.052 #undef SPDK_CONFIG_CUSTOMOCF 00:13:27.052 #undef SPDK_CONFIG_DAOS 00:13:27.052 #define SPDK_CONFIG_DAOS_DIR 00:13:27.052 #define SPDK_CONFIG_DEBUG 1 00:13:27.052 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:13:27.052 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:13:27.052 #define SPDK_CONFIG_DPDK_INC_DIR 00:13:27.052 #define SPDK_CONFIG_DPDK_LIB_DIR 00:13:27.052 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:13:27.052 #undef SPDK_CONFIG_DPDK_UADK 00:13:27.052 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:13:27.052 #define SPDK_CONFIG_EXAMPLES 1 00:13:27.052 #undef SPDK_CONFIG_FC 00:13:27.052 #define SPDK_CONFIG_FC_PATH 00:13:27.052 #define SPDK_CONFIG_FIO_PLUGIN 1 00:13:27.052 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:13:27.052 #define SPDK_CONFIG_FSDEV 1 00:13:27.052 #undef SPDK_CONFIG_FUSE 00:13:27.052 #undef SPDK_CONFIG_FUZZER 00:13:27.052 #define SPDK_CONFIG_FUZZER_LIB 00:13:27.052 #undef SPDK_CONFIG_GOLANG 00:13:27.052 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:13:27.052 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:13:27.052 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:13:27.052 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:13:27.052 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:13:27.052 #undef SPDK_CONFIG_HAVE_LIBBSD 00:13:27.053 #undef SPDK_CONFIG_HAVE_LZ4 00:13:27.053 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:13:27.053 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:13:27.053 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:13:27.053 #define SPDK_CONFIG_IDXD 1 00:13:27.053 #define SPDK_CONFIG_IDXD_KERNEL 1 00:13:27.053 #undef SPDK_CONFIG_IPSEC_MB 00:13:27.053 #define SPDK_CONFIG_IPSEC_MB_DIR 00:13:27.053 #define SPDK_CONFIG_ISAL 1 00:13:27.053 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:13:27.053 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:13:27.053 #define SPDK_CONFIG_LIBDIR 00:13:27.053 #undef SPDK_CONFIG_LTO 00:13:27.053 #define SPDK_CONFIG_MAX_LCORES 128 00:13:27.053 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:13:27.053 #define SPDK_CONFIG_NVME_CUSE 1 00:13:27.053 #undef SPDK_CONFIG_OCF 00:13:27.053 #define SPDK_CONFIG_OCF_PATH 00:13:27.053 #define SPDK_CONFIG_OPENSSL_PATH 00:13:27.053 #undef SPDK_CONFIG_PGO_CAPTURE 00:13:27.053 #define SPDK_CONFIG_PGO_DIR 00:13:27.053 #undef SPDK_CONFIG_PGO_USE 00:13:27.053 #define SPDK_CONFIG_PREFIX /usr/local 00:13:27.053 #undef SPDK_CONFIG_RAID5F 00:13:27.053 #undef SPDK_CONFIG_RBD 00:13:27.053 #define SPDK_CONFIG_RDMA 1 00:13:27.053 #define SPDK_CONFIG_RDMA_PROV verbs 00:13:27.053 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:13:27.053 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:13:27.053 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:13:27.053 #define SPDK_CONFIG_SHARED 1 00:13:27.053 #undef SPDK_CONFIG_SMA 00:13:27.053 #define SPDK_CONFIG_TESTS 1 00:13:27.053 #undef SPDK_CONFIG_TSAN 
00:13:27.053 #define SPDK_CONFIG_UBLK 1 00:13:27.053 #define SPDK_CONFIG_UBSAN 1 00:13:27.053 #undef SPDK_CONFIG_UNIT_TESTS 00:13:27.053 #undef SPDK_CONFIG_URING 00:13:27.053 #define SPDK_CONFIG_URING_PATH 00:13:27.053 #undef SPDK_CONFIG_URING_ZNS 00:13:27.053 #undef SPDK_CONFIG_USDT 00:13:27.053 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:13:27.053 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:13:27.053 #define SPDK_CONFIG_VFIO_USER 1 00:13:27.053 #define SPDK_CONFIG_VFIO_USER_DIR 00:13:27.053 #define SPDK_CONFIG_VHOST 1 00:13:27.053 #define SPDK_CONFIG_VIRTIO 1 00:13:27.053 #undef SPDK_CONFIG_VTUNE 00:13:27.053 #define SPDK_CONFIG_VTUNE_DIR 00:13:27.053 #define SPDK_CONFIG_WERROR 1 00:13:27.053 #define SPDK_CONFIG_WPDK_DIR 00:13:27.053 #undef SPDK_CONFIG_XNVME 00:13:27.053 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:13:27.053 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:13:27.315 06:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 
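
[editor's note] The trace above and the entries that follow show autotest_common.sh giving every SPDK_TEST_* / SPDK_RUN_* feature flag a default and then exporting it; the bare ": 0", ": 1" and ": tcp" entries are those defaults being evaluated. A minimal sketch of that default-then-export pattern, using only flag names that appear in this job's autorun-spdk.conf (the defaults shown here are examples, not SPDK's authoritative values):

#!/usr/bin/env bash
# Illustrative only: assign a default to each feature flag unless the CI
# config sourced earlier already set it, then export it for child processes.
: "${SPDK_RUN_FUNCTIONAL_TEST:=0}";   export SPDK_RUN_FUNCTIONAL_TEST
: "${SPDK_TEST_NVMF:=0}";             export SPDK_TEST_NVMF
: "${SPDK_TEST_NVME_CLI:=0}";         export SPDK_TEST_NVME_CLI
: "${SPDK_TEST_NVMF_TRANSPORT:=tcp}"; export SPDK_TEST_NVMF_TRANSPORT
: "${SPDK_TEST_NVMF_NICS:=e810}";     export SPDK_TEST_NVMF_NICS
: "${SPDK_RUN_UBSAN:=0}";             export SPDK_RUN_UBSAN

echo "NVMF=${SPDK_TEST_NVMF} transport=${SPDK_TEST_NVMF_TRANSPORT} nics=${SPDK_TEST_NVMF_NICS}"

Because ":" is a no-op command, ": "${VAR:=default}"" only assigns when VAR is unset, which is exactly why the trace prints a lone ": 0" or ": 1" immediately before each export.
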
00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:13:27.315 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:13:27.316 06:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@138 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@169 -- # : 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:27.316 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export PYTHONDONTWRITEBYTECODE=1 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # PYTHONDONTWRITEBYTECODE=1 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@197 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 
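
[editor's note] The exports above grow LD_LIBRARY_PATH and PYTHONPATH each time the common script is sourced, which is why the same build directories (spdk/build/lib, dpdk/build/lib, libvfio-user) repeat many times in the trace. A small illustrative helper, not SPDK code, that appends a directory to a colon-separated path variable only if it is not already listed:

#!/usr/bin/env bash
# Illustrative: idempotent append to a colon-separated path variable.
path_append() {
    local var=$1 dir=$2
    local current=${!var}                      # indirect read of the named variable
    case ":${current}:" in
        *":${dir}:"*) ;;                       # already present, nothing to do
        *) printf -v "$var" '%s' "${current:+${current}:}${dir}" ;;
    esac
    export "$var"
}

repo=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace root seen in this log
path_append LD_LIBRARY_PATH "$repo/build/lib"
path_append LD_LIBRARY_PATH "$repo/dpdk/build/lib"
path_append LD_LIBRARY_PATH "$repo/build/libvfio-user/usr/local/lib"
path_append PYTHONPATH      "$repo/python"
path_append PYTHONPATH      "$repo/test/rpc_plugins"

echo "LD_LIBRARY_PATH=$LD_LIBRARY_PATH"
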
00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@203 -- # rm -rf /var/tmp/asan_suppression_file 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # cat 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # echo leak:libfuse3.so 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # '[' -z /var/spdk/dependencies ']' 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export DEPENDENCY_DIR 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@257 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 
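
[editor's note] The entries above set up the sanitizer runtime: strict ASan/UBSan options are exported and a LeakSanitizer suppression file is recreated with a single entry that ignores known leaks in libfuse3. A minimal sketch of that setup; the option strings and the suppression path mirror what this trace shows, nothing else is implied:

#!/usr/bin/env bash
# Illustrative sketch of the sanitizer environment visible in the trace above.
supp=/var/tmp/asan_suppression_file
rm -f "$supp"
echo 'leak:libfuse3.so' > "$supp"              # suppress known libfuse3 leaks in LSan reports

export ASAN_OPTIONS='new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0'
export UBSAN_OPTIONS='halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134'
export LSAN_OPTIONS="suppressions=$supp"
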
00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # _LCOV_MAIN=0 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@266 -- # _LCOV_LLVM=1 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV= 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ '' == *clang* ]] 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@268 -- # [[ 0 -eq 1 ]] 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # _lcov_opt[_LCOV_MAIN]= 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # lcov_opt= 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@276 -- # '[' 0 -eq 0 ']' 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # export valgrind= 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@277 -- # valgrind= 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # uname -s 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@283 -- # '[' Linux = Linux ']' 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@284 -- # HUGEMEM=4096 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # export CLEAR_HUGE=yes 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # CLEAR_HUGE=yes 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # MAKE=make 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@288 -- # MAKEFLAGS=-j48 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # export HUGEMEM=4096 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@304 -- # HUGEMEM=4096 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # NO_HUGE=() 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@307 -- # TEST_MODE= 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # for i in "$@" 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # case "$i" in 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@314 -- # TEST_TRANSPORT=tcp 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # [[ -z 2028889 ]] 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # kill -0 2028889 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1676 -- # set_test_storage 2147483648 
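
[editor's note] The trace that follows is set_test_storage walking "df -T" output, recording filesystem type, size and available space per mount, and then exporting SPDK_TEST_STORAGE once a directory with at least the requested 2 GiB is found. A simplified sketch of that selection logic; the function and variable names below are made up for the example and the requested size matches this trace:

#!/usr/bin/env bash
# Illustrative: accept the first candidate directory whose filesystem has at
# least the requested number of bytes available, as reported by df.
pick_test_storage() {
    local requested=$1; shift
    local candidate avail_kb
    for candidate in "$@"; do
        # df -P prints POSIX-format output; column 4 of the data line is "Available" in 1K blocks.
        avail_kb=$(df -P "$candidate" | awk 'NR==2 {print $4}')
        if (( avail_kb * 1024 >= requested )); then
            echo "$candidate"
            return 0
        fi
    done
    return 1
}

# 2 GiB, the size requested by set_test_storage in the trace above (2147483648 bytes).
storage=$(pick_test_storage $((2 * 1024 * 1024 * 1024)) "$PWD" /tmp) || exit 1
export SPDK_TEST_STORAGE=$storage
echo "* Found test storage at $SPDK_TEST_STORAGE"
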
00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@339 -- # [[ -v testdir ]] 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # local requested_size=2147483648 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@342 -- # local mount target_dir 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local -A mounts fss sizes avails uses 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # local source fs size avail mount use 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local storage_fallback storage_candidates 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # mktemp -udt spdk.XXXXXX 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@349 -- # storage_fallback=/tmp/spdk.EzFc6U 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@354 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # [[ -n '' ]] 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # [[ -n '' ]] 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.EzFc6U/tests/target /tmp/spdk.EzFc6U 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # requested_size=2214592512 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # df -T 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # grep -v Filesystem 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_devtmpfs 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=devtmpfs 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=67108864 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=67108864 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=0 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=/dev/pmem0 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=ext2 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=4096 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=5284429824 00:13:27.317 06:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=5284425728 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=spdk_root 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=overlay 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=50839908352 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=61988519936 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11148611584 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=30982893568 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=11366400 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:27.317 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=12375265280 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=12397707264 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=22441984 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=29919776768 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=30994259968 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=1074483200 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mounts["$mount"]=tmpfs 00:13:27.318 06:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # fss["$mount"]=tmpfs 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # avails["$mount"]=6198837248 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # sizes["$mount"]=6198849536 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # uses["$mount"]=12288 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # read -r source fs size use avail _ mount 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # printf '* Looking for test storage...\n' 00:13:27.318 * Looking for test storage... 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # local target_space new_size 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # for target_dir in "${storage_candidates[@]}" 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # awk '$1 !~ /Filesystem/{print $6}' 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # mount=/ 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # target_space=50839908352 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@386 -- # (( target_space == 0 || target_space < requested_size )) 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # (( target_space >= requested_size )) 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == tmpfs ]] 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ overlay == ramfs ]] 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # [[ / == / ]] 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@392 -- # new_size=13363204096 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # (( new_size * 100 / sizes[/] > 95 )) 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@398 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@399 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # return 0 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set -o errtrace 00:13:27.318 06:23:58 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1679 -- # shopt -s extdebug 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # true 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # xtrace_fd 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lcov --version 00:13:27.318 06:23:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:13:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.318 --rc genhtml_branch_coverage=1 00:13:27.318 --rc genhtml_function_coverage=1 00:13:27.318 --rc genhtml_legend=1 00:13:27.318 --rc geninfo_all_blocks=1 00:13:27.318 --rc geninfo_unexecuted_blocks=1 00:13:27.318 00:13:27.318 ' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:13:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.318 --rc genhtml_branch_coverage=1 00:13:27.318 --rc genhtml_function_coverage=1 00:13:27.318 --rc genhtml_legend=1 00:13:27.318 --rc geninfo_all_blocks=1 00:13:27.318 --rc geninfo_unexecuted_blocks=1 00:13:27.318 00:13:27.318 ' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:13:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.318 --rc genhtml_branch_coverage=1 00:13:27.318 --rc genhtml_function_coverage=1 00:13:27.318 --rc genhtml_legend=1 00:13:27.318 --rc geninfo_all_blocks=1 00:13:27.318 --rc geninfo_unexecuted_blocks=1 00:13:27.318 00:13:27.318 ' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:13:27.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:27.318 --rc genhtml_branch_coverage=1 00:13:27.318 --rc genhtml_function_coverage=1 00:13:27.318 --rc genhtml_legend=1 00:13:27.318 --rc geninfo_all_blocks=1 00:13:27.318 --rc geninfo_unexecuted_blocks=1 00:13:27.318 00:13:27.318 ' 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:27.318 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" 
-e 0xFFFF) 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:13:27.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:13:27.319 06:23:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:13:29.939 
06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:13:29.939 Found 0000:09:00.0 (0x8086 - 0x159b) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:13:29.939 Found 0000:09:00.1 (0x8086 - 0x159b) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:13:29.939 Found net devices under 0000:09:00.0: cvl_0_0 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:13:29.939 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:13:29.940 Found net devices under 
0000:09:00.1: cvl_0_1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:13:29.940 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:29.940 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.225 ms 00:13:29.940 00:13:29.940 --- 10.0.0.2 ping statistics --- 00:13:29.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.940 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:29.940 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:29.940 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:13:29.940 00:13:29.940 --- 10.0.0.1 ping statistics --- 00:13:29.940 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:29.940 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:29.940 ************************************ 00:13:29.940 START TEST nvmf_filesystem_no_in_capsule 00:13:29.940 ************************************ 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 
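
For reference, the network topology that nvmf_tcp_init builds above reduces to the commands below. This is a condensed sketch rather than additional log output; the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addresses are simply the values this run detected for the two E810 ports.

    ip netns add cvl_0_0_ns_spdk                       # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # move one port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT    # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                                 # root ns -> target ns
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> root ns

The two pings verify connectivity across the namespace boundary before the target application is started.
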
00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2030734 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2030734 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2030734 ']' 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:29.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.940 [2024-11-20 06:24:01.412817] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:13:29.940 [2024-11-20 06:24:01.412912] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:29.940 [2024-11-20 06:24:01.488199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:29.940 [2024-11-20 06:24:01.551339] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:29.940 [2024-11-20 06:24:01.551408] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:29.940 [2024-11-20 06:24:01.551423] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:29.940 [2024-11-20 06:24:01.551450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:29.940 [2024-11-20 06:24:01.551460] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
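
The target itself is started inside that namespace by nvmfappstart, as shown just above. A minimal stand-in for what nvmfappstart/waitforlisten do is sketched below; the until-loop only illustrates the wait, it is not the test suite's exact implementation, and the default /var/tmp/spdk.sock RPC socket is assumed.

    cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!        # 2030734 in this run
    # block until the app answers on its JSON-RPC socket
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
        sleep 0.1
    done

Here -m 0xF places one reactor on each of cores 0-3 (the four "Reactor started" lines that follow), and -e 0xFFFF enables the tracepoint group mask reported in the spdk_trace notices.
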
00:13:29.940 [2024-11-20 06:24:01.553100] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:29.940 [2024-11-20 06:24:01.553182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:29.940 [2024-11-20 06:24:01.553161] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:29.940 [2024-11-20 06:24:01.553185] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.940 [2024-11-20 06:24:01.703961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:29.940 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.230 Malloc1 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.230 06:24:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.230 [2024-11-20 06:24:01.921055] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:30.230 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:30.230 { 00:13:30.230 "name": "Malloc1", 00:13:30.230 "aliases": [ 00:13:30.230 "e1163eea-5aa0-47e6-ad4e-04d25d3decef" 00:13:30.230 ], 00:13:30.230 "product_name": "Malloc disk", 00:13:30.230 "block_size": 512, 00:13:30.230 "num_blocks": 1048576, 00:13:30.230 "uuid": "e1163eea-5aa0-47e6-ad4e-04d25d3decef", 00:13:30.230 "assigned_rate_limits": { 00:13:30.230 "rw_ios_per_sec": 0, 00:13:30.230 "rw_mbytes_per_sec": 0, 00:13:30.230 "r_mbytes_per_sec": 0, 00:13:30.230 "w_mbytes_per_sec": 0 00:13:30.230 }, 00:13:30.230 "claimed": true, 00:13:30.230 "claim_type": "exclusive_write", 00:13:30.230 "zoned": false, 00:13:30.230 "supported_io_types": { 00:13:30.230 "read": 
true, 00:13:30.230 "write": true, 00:13:30.230 "unmap": true, 00:13:30.230 "flush": true, 00:13:30.230 "reset": true, 00:13:30.230 "nvme_admin": false, 00:13:30.230 "nvme_io": false, 00:13:30.230 "nvme_io_md": false, 00:13:30.230 "write_zeroes": true, 00:13:30.230 "zcopy": true, 00:13:30.230 "get_zone_info": false, 00:13:30.230 "zone_management": false, 00:13:30.230 "zone_append": false, 00:13:30.230 "compare": false, 00:13:30.230 "compare_and_write": false, 00:13:30.230 "abort": true, 00:13:30.230 "seek_hole": false, 00:13:30.230 "seek_data": false, 00:13:30.230 "copy": true, 00:13:30.231 "nvme_iov_md": false 00:13:30.231 }, 00:13:30.231 "memory_domains": [ 00:13:30.231 { 00:13:30.231 "dma_device_id": "system", 00:13:30.231 "dma_device_type": 1 00:13:30.231 }, 00:13:30.231 { 00:13:30.231 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.231 "dma_device_type": 2 00:13:30.231 } 00:13:30.231 ], 00:13:30.231 "driver_specific": {} 00:13:30.231 } 00:13:30.231 ]' 00:13:30.231 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:30.231 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:30.231 06:24:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:30.231 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:30.231 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:30.231 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:30.231 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:30.231 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:31.164 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:31.164 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:31.164 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:31.164 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:31.164 06:24:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:33.062 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:33.062 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # grep -c 
SPDKISFASTANDAWESOME 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:33.063 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:33.320 06:24:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:33.578 06:24:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:34.511 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:13:34.511 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:34.511 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:34.511 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.512 ************************************ 00:13:34.512 START TEST filesystem_ext4 00:13:34.512 ************************************ 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 
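
Everything between nvmfappstart and the first mkfs above is plain provisioning plus host-side discovery. Expressed directly with scripts/rpc.py (rpc_cmd in the log is the test suite's wrapper around it) and using the values from this run, it is roughly:

    # target side: TCP transport, one 512 MiB malloc bdev, one subsystem
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    ./scripts/rpc.py bdev_malloc_create 512 512 -b Malloc1          # 1048576 blocks x 512 B
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator side (root namespace): connect, locate the device by serial, partition it
    nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
        --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
    nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')   # nvme0n1 here
    parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe

The 536870912-byte size check simply confirms that the exported namespace matches the 512 MiB malloc bdev before any filesystem is created on /dev/nvme0n1p1.
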
00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:34.512 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:34.512 mke2fs 1.47.0 (5-Feb-2023) 00:13:34.770 Discarding device blocks: 0/522240 done 00:13:34.770 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:34.770 Filesystem UUID: cc41a83a-c720-4952-8bb8-013b9e0b1555 00:13:34.770 Superblock backups stored on blocks: 00:13:34.770 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:34.770 00:13:34.770 Allocating group tables: 0/64 done 00:13:34.770 Writing inode tables: 0/64 done 00:13:34.770 Creating journal (8192 blocks): done 00:13:34.770 Writing superblocks and filesystem accounting information: 0/64 done 00:13:34.770 00:13:34.770 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:34.770 06:24:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:40.032 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:40.032 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:40.032 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:40.032 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:40.290 
06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2030734 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:40.290 00:13:40.290 real 0m5.600s 00:13:40.290 user 0m0.019s 00:13:40.290 sys 0m0.067s 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:40.290 ************************************ 00:13:40.290 END TEST filesystem_ext4 00:13:40.290 ************************************ 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:40.290 ************************************ 00:13:40.290 START TEST filesystem_btrfs 00:13:40.290 ************************************ 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:40.290 06:24:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:40.290 06:24:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:40.548 btrfs-progs v6.8.1 00:13:40.548 See https://btrfs.readthedocs.io for more information. 00:13:40.548 00:13:40.548 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:40.548 NOTE: several default settings have changed in version 5.15, please make sure 00:13:40.548 this does not affect your deployments: 00:13:40.548 - DUP for metadata (-m dup) 00:13:40.548 - enabled no-holes (-O no-holes) 00:13:40.548 - enabled free-space-tree (-R free-space-tree) 00:13:40.548 00:13:40.548 Label: (null) 00:13:40.548 UUID: 77daf8be-236f-4979-acde-a9baa5d5d038 00:13:40.548 Node size: 16384 00:13:40.548 Sector size: 4096 (CPU page size: 4096) 00:13:40.548 Filesystem size: 510.00MiB 00:13:40.548 Block group profiles: 00:13:40.548 Data: single 8.00MiB 00:13:40.548 Metadata: DUP 32.00MiB 00:13:40.548 System: DUP 8.00MiB 00:13:40.548 SSD detected: yes 00:13:40.548 Zoned device: no 00:13:40.548 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:40.548 Checksum: crc32c 00:13:40.548 Number of devices: 1 00:13:40.548 Devices: 00:13:40.548 ID SIZE PATH 00:13:40.548 1 510.00MiB /dev/nvme0n1p1 00:13:40.548 00:13:40.548 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:40.548 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:41.114 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:41.114 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:41.114 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:41.114 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:41.114 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:41.114 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2030734 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:41.373 
06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:41.373 00:13:41.373 real 0m1.007s 00:13:41.373 user 0m0.024s 00:13:41.373 sys 0m0.097s 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:41.373 ************************************ 00:13:41.373 END TEST filesystem_btrfs 00:13:41.373 ************************************ 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:41.373 06:24:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:41.373 ************************************ 00:13:41.373 START TEST filesystem_xfs 00:13:41.373 ************************************ 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # local force 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:41.373 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:41.373 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:41.373 = sectsz=512 attr=2, projid32bit=1 00:13:41.373 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:41.373 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:41.373 data 
= bsize=4096 blocks=130560, imaxpct=25 00:13:41.373 = sunit=0 swidth=0 blks 00:13:41.373 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:41.373 log =internal log bsize=4096 blocks=16384, version=2 00:13:41.373 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:41.374 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:42.307 Discarding blocks...Done. 00:13:42.307 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:42.307 06:24:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2030734 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:44.837 00:13:44.837 real 0m3.268s 00:13:44.837 user 0m0.019s 00:13:44.837 sys 0m0.056s 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:44.837 ************************************ 00:13:44.837 END TEST filesystem_xfs 00:13:44.837 ************************************ 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:44.837 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:44.837 06:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1221 -- # local i=0 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2030734 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2030734 ']' 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2030734 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # uname 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2030734 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2030734' 00:13:44.837 killing process with pid 2030734 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@971 -- # kill 2030734 00:13:44.837 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@976 -- # wait 2030734 00:13:45.403 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:45.403 00:13:45.403 real 0m15.640s 00:13:45.403 user 1m0.466s 00:13:45.403 sys 0m2.014s 00:13:45.403 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:45.403 06:24:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.403 ************************************ 00:13:45.403 END TEST nvmf_filesystem_no_in_capsule 00:13:45.403 ************************************ 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:45.403 ************************************ 00:13:45.403 START TEST nvmf_filesystem_in_capsule 00:13:45.403 ************************************ 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1127 -- # nvmf_filesystem_part 4096 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=2033248 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 2033248 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # '[' -z 2033248 ']' 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # local max_retries=100 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.403 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # xtrace_disable 00:13:45.403 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.403 [2024-11-20 06:24:17.111054] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:13:45.403 [2024-11-20 06:24:17.111143] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.403 [2024-11-20 06:24:17.188016] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:45.661 [2024-11-20 06:24:17.248963] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.661 [2024-11-20 06:24:17.249008] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.661 [2024-11-20 06:24:17.249036] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.661 [2024-11-20 06:24:17.249047] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.661 [2024-11-20 06:24:17.249057] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.661 [2024-11-20 06:24:17.250727] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:45.661 [2024-11-20 06:24:17.250792] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.661 [2024-11-20 06:24:17.250815] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.661 [2024-11-20 06:24:17.250819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@866 -- # return 0 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.661 [2024-11-20 06:24:17.398069] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:45.661 06:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.661 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.920 Malloc1 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.920 [2024-11-20 06:24:17.597118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bdev_name=Malloc1 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local bdev_info 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bs 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local nb 00:13:45.920 06:24:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # bdev_info='[ 00:13:45.920 { 00:13:45.920 "name": "Malloc1", 00:13:45.920 "aliases": [ 00:13:45.920 "5c0e5dc4-fdbb-4574-8cc9-5037f698a87e" 00:13:45.920 ], 00:13:45.920 "product_name": "Malloc disk", 00:13:45.920 "block_size": 512, 00:13:45.920 "num_blocks": 1048576, 00:13:45.920 "uuid": "5c0e5dc4-fdbb-4574-8cc9-5037f698a87e", 00:13:45.920 "assigned_rate_limits": { 00:13:45.920 "rw_ios_per_sec": 0, 00:13:45.920 "rw_mbytes_per_sec": 0, 00:13:45.920 "r_mbytes_per_sec": 0, 00:13:45.920 "w_mbytes_per_sec": 0 00:13:45.920 }, 00:13:45.920 "claimed": true, 00:13:45.920 "claim_type": "exclusive_write", 00:13:45.920 "zoned": false, 00:13:45.920 "supported_io_types": { 00:13:45.920 "read": true, 00:13:45.920 "write": true, 00:13:45.920 "unmap": true, 00:13:45.920 "flush": true, 00:13:45.920 "reset": true, 00:13:45.920 "nvme_admin": false, 00:13:45.920 "nvme_io": false, 00:13:45.920 "nvme_io_md": false, 00:13:45.920 "write_zeroes": true, 00:13:45.920 "zcopy": true, 00:13:45.920 "get_zone_info": false, 00:13:45.920 "zone_management": false, 00:13:45.920 "zone_append": false, 00:13:45.920 "compare": false, 00:13:45.920 "compare_and_write": false, 00:13:45.920 "abort": true, 00:13:45.920 "seek_hole": false, 00:13:45.920 "seek_data": false, 00:13:45.920 "copy": true, 00:13:45.920 "nvme_iov_md": false 00:13:45.920 }, 00:13:45.920 "memory_domains": [ 00:13:45.920 { 00:13:45.920 "dma_device_id": "system", 00:13:45.920 "dma_device_type": 1 00:13:45.920 }, 00:13:45.920 { 00:13:45.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:45.920 "dma_device_type": 2 00:13:45.920 } 00:13:45.920 ], 00:13:45.920 "driver_specific": {} 00:13:45.920 } 00:13:45.920 ]' 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # jq '.[] .block_size' 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # bs=512 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # jq '.[] .num_blocks' 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # nb=1048576 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1389 -- # bdev_size=512 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1390 -- # echo 512 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:45.920 06:24:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:46.856 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:46.856 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # local i=0 00:13:46.856 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:13:46.856 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:13:46.856 06:24:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # sleep 2 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # return 0 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:48.754 06:24:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:49.012 06:24:20 
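The log then switches to the initiator side: nvme-cli connects to the subsystem, the suite resolves the new block device by its serial number, checks that the exported capacity matches the 536870912-byte malloc bdev, and lays down a single GPT partition. A condensed sketch of those steps (the nvme0n1 name comes from this run; another host may enumerate the controller differently):

nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
# find the namespace by serial, then compare its size with the 512 MB bdev
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
nvme_size=$(( $(cat /sys/block/$nvme_name/size) * 512 ))   # sysfs reports size in 512-byte sectors
[ "$nvme_size" -eq 536870912 ]
# one GPT partition spanning the whole namespace
parted -s /dev/$nvme_name mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe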
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:49.945 06:24:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:50.878 ************************************ 00:13:50.878 START TEST filesystem_in_capsule_ext4 00:13:50.878 ************************************ 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:50.878 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local fstype=ext4 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local i=0 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local force 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # '[' ext4 = ext4 ']' 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@934 -- # force=-F 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@939 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:50.879 mke2fs 1.47.0 (5-Feb-2023) 00:13:50.879 Discarding device blocks: 0/522240 done 00:13:50.879 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:50.879 Filesystem UUID: 26f1930e-f9d0-45db-a7e8-6156805f2359 00:13:50.879 Superblock backups stored on blocks: 00:13:50.879 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:50.879 00:13:50.879 Allocating group tables: 0/64 done 00:13:50.879 Writing inode tables: 
0/64 done 00:13:50.879 Creating journal (8192 blocks): done 00:13:50.879 Writing superblocks and filesystem accounting information: 0/64 done 00:13:50.879 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@947 -- # return 0 00:13:50.879 06:24:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 2033248 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:57.433 00:13:57.433 real 0m5.748s 00:13:57.433 user 0m0.017s 00:13:57.433 sys 0m0.057s 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:57.433 ************************************ 00:13:57.433 END TEST filesystem_in_capsule_ext4 00:13:57.433 ************************************ 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.433 
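Each filesystem_in_capsule_* subtest runs the same create/verify cycle on that partition; only the mkfs command differs (mkfs.ext4 -F above, mkfs.btrfs -f and mkfs.xfs -f in the two subtests that follow). A condensed sketch of one iteration, with the mount point and scratch file taken from the log:

mkfs.ext4 -F /dev/nvme0n1p1                 # btrfs and xfs use -f instead
mount /dev/nvme0n1p1 /mnt/device
touch /mnt/device/aaa                       # prove the filesystem accepts writes over NVMe/TCP
sync
rm /mnt/device/aaa
sync
umount /mnt/device
kill -0 "$nvmfpid"                          # the target process must still be alive
lsblk -l -o NAME | grep -q -w nvme0n1       # namespace and partition are still visible
lsblk -l -o NAME | grep -q -w nvme0n1p1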
************************************ 00:13:57.433 START TEST filesystem_in_capsule_btrfs 00:13:57.433 ************************************ 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local fstype=btrfs 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local i=0 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local force 00:13:57.433 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # '[' btrfs = ext4 ']' 00:13:57.434 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@936 -- # force=-f 00:13:57.434 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@939 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:57.434 btrfs-progs v6.8.1 00:13:57.434 See https://btrfs.readthedocs.io for more information. 00:13:57.434 00:13:57.434 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:13:57.434 NOTE: several default settings have changed in version 5.15, please make sure 00:13:57.434 this does not affect your deployments: 00:13:57.434 - DUP for metadata (-m dup) 00:13:57.434 - enabled no-holes (-O no-holes) 00:13:57.434 - enabled free-space-tree (-R free-space-tree) 00:13:57.434 00:13:57.434 Label: (null) 00:13:57.434 UUID: 28a32912-6ff4-4397-a5b8-91b2cad3318f 00:13:57.434 Node size: 16384 00:13:57.434 Sector size: 4096 (CPU page size: 4096) 00:13:57.434 Filesystem size: 510.00MiB 00:13:57.434 Block group profiles: 00:13:57.434 Data: single 8.00MiB 00:13:57.434 Metadata: DUP 32.00MiB 00:13:57.434 System: DUP 8.00MiB 00:13:57.434 SSD detected: yes 00:13:57.434 Zoned device: no 00:13:57.434 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:57.434 Checksum: crc32c 00:13:57.434 Number of devices: 1 00:13:57.434 Devices: 00:13:57.434 ID SIZE PATH 00:13:57.434 1 510.00MiB /dev/nvme0n1p1 00:13:57.434 00:13:57.434 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@947 -- # return 0 00:13:57.434 06:24:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2033248 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:57.691 00:13:57.691 real 0m1.234s 00:13:57.691 user 0m0.015s 00:13:57.691 sys 0m0.100s 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 
-- # set +x 00:13:57.691 ************************************ 00:13:57.691 END TEST filesystem_in_capsule_btrfs 00:13:57.691 ************************************ 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1109 -- # xtrace_disable 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:57.691 ************************************ 00:13:57.691 START TEST filesystem_in_capsule_xfs 00:13:57.691 ************************************ 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1127 -- # nvmf_filesystem_create xfs nvme0n1 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local fstype=xfs 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local dev_name=/dev/nvme0n1p1 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local i=0 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local force 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # '[' xfs = ext4 ']' 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@936 -- # force=-f 00:13:57.691 06:24:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@939 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:57.949 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:57.949 = sectsz=512 attr=2, projid32bit=1 00:13:57.949 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:57.949 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:57.949 data = bsize=4096 blocks=130560, imaxpct=25 00:13:57.949 = sunit=0 swidth=0 blks 00:13:57.949 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:57.949 log =internal log bsize=4096 blocks=16384, version=2 00:13:57.949 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:57.949 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:58.881 Discarding blocks...Done. 
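The three mkfs invocations come from the suite's make_filesystem helper, whose xtrace shows the force flag being chosen per filesystem type ('[ ext4 = ext4 ]' selects -F, everything else gets -f). A rough sketch of that selection; the real helper also declares a retry counter (the local i=0 visible above), which is omitted here:

make_filesystem() {
    local fstype=$1 dev_name=$2 force
    if [ "$fstype" = ext4 ]; then
        force=-F        # mkfs.ext4 forces with -F
    else
        force=-f        # mkfs.btrfs and mkfs.xfs force with -f
    fi
    mkfs."$fstype" $force "$dev_name"
}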
00:13:58.881 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@947 -- # return 0 00:13:58.881 06:24:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:14:01.409 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:14:01.409 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:14:01.409 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:14:01.409 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:14:01.409 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:14:01.409 06:24:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2033248 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:14:01.409 00:14:01.409 real 0m3.500s 00:14:01.409 user 0m0.023s 00:14:01.409 sys 0m0.055s 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:14:01.409 ************************************ 00:14:01.409 END TEST filesystem_in_capsule_xfs 00:14:01.409 ************************************ 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:01.409 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- 
common/autotest_common.sh@1221 -- # local i=0 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:01.409 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1233 -- # return 0 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2033248 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # '[' -z 2033248 ']' 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # kill -0 2033248 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # uname 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2033248 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2033248' 00:14:01.667 killing process with pid 2033248 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@971 -- # kill 2033248 00:14:01.667 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@976 -- # wait 2033248 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:14:01.925 00:14:01.925 real 0m16.658s 00:14:01.925 user 1m4.388s 00:14:01.925 sys 0m2.133s 00:14:01.925 06:24:33 
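Teardown mirrors the setup: the scratch partition is removed, the host disconnects, the subsystem is deleted over RPC, and the target process is killed and reaped. A condensed sketch of the commands recorded above (rpc.py stands in for the suite's rpc_cmd wrapper):

# initiator side
flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1
sync
nvme disconnect -n nqn.2016-06.io.spdk:cnode1
# target side
./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
kill "$nvmfpid" && wait "$nvmfpid"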
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:14:01.925 ************************************ 00:14:01.925 END TEST nvmf_filesystem_in_capsule 00:14:01.925 ************************************ 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:01.925 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:01.925 rmmod nvme_tcp 00:14:01.925 rmmod nvme_fabrics 00:14:02.182 rmmod nvme_keyring 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:02.182 06:24:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:04.102 00:14:04.102 real 0m37.154s 00:14:04.102 user 2m5.926s 00:14:04.102 sys 0m5.942s 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:14:04.102 
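nvmftestfini then unwinds the kernel and firewall state the test set up: the nvme-tcp and nvme-fabrics modules are unloaded (the rmmod lines above), the SPDK-tagged iptables rules are dropped, and the target network namespace is torn down. A rough sketch of that cleanup; treating remove_spdk_ns as "delete the cvl_0_0_ns_spdk namespace" is an assumption about what that helper does:

sync
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# restore the ruleset minus everything tagged with the SPDK_NVMF comment
iptables-save | grep -v SPDK_NVMF | iptables-restore
ip netns delete cvl_0_0_ns_spdk     # assumed equivalent of remove_spdk_ns
ip -4 addr flush cvl_0_1            # the next log entry shows this flush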
************************************ 00:14:04.102 END TEST nvmf_filesystem 00:14:04.102 ************************************ 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:04.102 ************************************ 00:14:04.102 START TEST nvmf_target_discovery 00:14:04.102 ************************************ 00:14:04.102 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:14:04.363 * Looking for test storage... 00:14:04.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:04.363 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:04.363 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:14:04.363 06:24:35 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.363 --rc genhtml_branch_coverage=1 00:14:04.363 --rc genhtml_function_coverage=1 00:14:04.363 --rc genhtml_legend=1 00:14:04.363 --rc geninfo_all_blocks=1 00:14:04.363 --rc geninfo_unexecuted_blocks=1 00:14:04.363 00:14:04.363 ' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.363 --rc genhtml_branch_coverage=1 00:14:04.363 --rc genhtml_function_coverage=1 00:14:04.363 --rc genhtml_legend=1 00:14:04.363 --rc geninfo_all_blocks=1 00:14:04.363 --rc geninfo_unexecuted_blocks=1 00:14:04.363 00:14:04.363 ' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.363 --rc genhtml_branch_coverage=1 00:14:04.363 --rc genhtml_function_coverage=1 00:14:04.363 --rc genhtml_legend=1 00:14:04.363 --rc geninfo_all_blocks=1 00:14:04.363 --rc geninfo_unexecuted_blocks=1 00:14:04.363 00:14:04.363 ' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:04.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:04.363 --rc genhtml_branch_coverage=1 00:14:04.363 --rc genhtml_function_coverage=1 00:14:04.363 --rc genhtml_legend=1 00:14:04.363 --rc geninfo_all_blocks=1 00:14:04.363 --rc geninfo_unexecuted_blocks=1 00:14:04.363 00:14:04.363 ' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.363 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:04.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:14:04.364 06:24:36 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.915 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:06.915 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:14:06.915 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:06.915 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:14:06.916 06:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:06.916 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:06.916 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:06.916 Found net devices under 0000:09:00.0: cvl_0_0 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:06.916 Found net devices under 0000:09:00.1: cvl_0_1 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:06.916 06:24:38 
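The nvmf_tcp_init sequence above (together with the link-up, iptables and ping steps that follow) builds the two-port test topology: one E810 port stays in the default namespace as the initiator, the other is moved into a dedicated namespace for the target. A minimal sketch of the same steps, using the interface names and 10.0.0.0/24 addresses this run happens to use:

  IFACE_TGT=cvl_0_0        # port that will host the SPDK target
  IFACE_INI=cvl_0_1        # port used by the initiator (host side)
  NS=cvl_0_0_ns_spdk       # network namespace for the target

  ip -4 addr flush "$IFACE_TGT"
  ip -4 addr flush "$IFACE_INI"
  ip netns add "$NS"
  ip link set "$IFACE_TGT" netns "$NS"                          # move target port into the namespace
  ip addr add 10.0.0.1/24 dev "$IFACE_INI"                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$IFACE_TGT"  # target address
  ip link set "$IFACE_INI" up
  ip netns exec "$NS" ip link set "$IFACE_TGT" up
  ip netns exec "$NS" ip link set lo up
  # allow NVMe/TCP traffic to port 4420 on the initiator-side interface
  iptables -I INPUT 1 -i "$IFACE_INI" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                      # sanity check: host -> target namespace
  ip netns exec "$NS" ping -c 1 10.0.0.1  # and back
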
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:06.916 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:06.916 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:14:06.916 00:14:06.916 --- 10.0.0.2 ping statistics --- 00:14:06.916 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.916 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:14:06.916 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:06.916 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:06.916 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:14:06.916 00:14:06.916 --- 10.0.0.1 ping statistics --- 00:14:06.917 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:06.917 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=2037414 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 2037414 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # '[' -z 2037414 ']' 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.917 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 [2024-11-20 06:24:38.365065] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:14:06.917 [2024-11-20 06:24:38.365158] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.917 [2024-11-20 06:24:38.436059] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.917 [2024-11-20 06:24:38.494905] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:06.917 [2024-11-20 06:24:38.494956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.917 [2024-11-20 06:24:38.494984] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.917 [2024-11-20 06:24:38.494994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.917 [2024-11-20 06:24:38.495003] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
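nvmfappstart runs the target inside that namespace and waits for its RPC socket. Outside the harness the equivalent is roughly the following sketch; the polling loop stands in for the harness's waitforlisten helper, and rpc_get_methods is only used here as a cheap liveness probe:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path used in this run
  NS=cvl_0_0_ns_spdk

  # start the NVMe-oF target with the same flags the test uses
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # wait until the app answers on its default RPC socket (/var/tmp/spdk.sock)
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
      sleep 0.5
  done
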
00:14:06.917 [2024-11-20 06:24:38.496514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.917 [2024-11-20 06:24:38.496571] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.917 [2024-11-20 06:24:38.496639] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.917 [2024-11-20 06:24:38.496642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@866 -- # return 0 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 [2024-11-20 06:24:38.637657] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 Null1 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 
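target/discovery.sh then creates the TCP transport and provisions four identical subsystems, each backed by a null bdev and listening on 10.0.0.2:4420. Stripped of the rpc_cmd/xtrace wrappers, the loop amounts to roughly the following sketch (the rpc helper is a local shorthand for scripts/rpc.py against the target started above; all argument values are the ones visible in the log):

  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  # create the TCP transport with the options the test passes
  rpc nvmf_create_transport -t tcp -o -u 8192

  for i in 1 2 3 4; do
      rpc bdev_null_create "Null$i" 102400 512            # null bdev, arguments as used by the test
      rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
          -a -s "SPDK0000000000000$i"                     # -a: allow any host, -s: serial number
      rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
          -t tcp -a 10.0.0.2 -s 4420
  done
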
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 [2024-11-20 06:24:38.693475] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 Null2 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:14:06.917 Null3 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:06.917 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 Null4 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:14:07.176 00:14:07.176 Discovery Log Number of Records 6, Generation counter 6 00:14:07.176 =====Discovery Log Entry 0====== 00:14:07.176 trtype: tcp 00:14:07.176 adrfam: ipv4 00:14:07.176 subtype: current discovery subsystem 00:14:07.176 treq: not required 00:14:07.176 portid: 0 00:14:07.176 trsvcid: 4420 00:14:07.176 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:07.176 traddr: 10.0.0.2 00:14:07.176 eflags: explicit discovery connections, duplicate discovery information 00:14:07.176 sectype: none 00:14:07.176 =====Discovery Log Entry 1====== 00:14:07.176 trtype: tcp 00:14:07.176 adrfam: ipv4 00:14:07.176 subtype: nvme subsystem 00:14:07.176 treq: not required 00:14:07.176 portid: 0 00:14:07.176 trsvcid: 4420 00:14:07.176 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:07.176 traddr: 10.0.0.2 00:14:07.176 eflags: none 00:14:07.176 sectype: none 00:14:07.176 =====Discovery Log Entry 2====== 00:14:07.176 trtype: tcp 00:14:07.176 adrfam: ipv4 00:14:07.176 subtype: nvme subsystem 00:14:07.176 treq: not required 00:14:07.176 portid: 0 00:14:07.176 trsvcid: 4420 00:14:07.176 subnqn: nqn.2016-06.io.spdk:cnode2 00:14:07.176 traddr: 10.0.0.2 00:14:07.176 eflags: none 00:14:07.176 sectype: none 00:14:07.176 =====Discovery Log Entry 3====== 00:14:07.176 trtype: tcp 00:14:07.176 adrfam: ipv4 00:14:07.176 subtype: nvme subsystem 00:14:07.176 treq: not required 00:14:07.176 portid: 0 00:14:07.176 trsvcid: 4420 00:14:07.176 subnqn: nqn.2016-06.io.spdk:cnode3 00:14:07.176 traddr: 10.0.0.2 00:14:07.176 eflags: none 00:14:07.176 sectype: none 00:14:07.176 =====Discovery Log Entry 4====== 00:14:07.176 trtype: tcp 00:14:07.176 adrfam: ipv4 00:14:07.176 subtype: nvme subsystem 
00:14:07.176 treq: not required 00:14:07.176 portid: 0 00:14:07.176 trsvcid: 4420 00:14:07.176 subnqn: nqn.2016-06.io.spdk:cnode4 00:14:07.176 traddr: 10.0.0.2 00:14:07.176 eflags: none 00:14:07.176 sectype: none 00:14:07.176 =====Discovery Log Entry 5====== 00:14:07.176 trtype: tcp 00:14:07.176 adrfam: ipv4 00:14:07.176 subtype: discovery subsystem referral 00:14:07.176 treq: not required 00:14:07.176 portid: 0 00:14:07.176 trsvcid: 4430 00:14:07.176 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:07.176 traddr: 10.0.0.2 00:14:07.176 eflags: none 00:14:07.176 sectype: none 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:14:07.176 Perform nvmf subsystem discovery via RPC 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.176 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.176 [ 00:14:07.176 { 00:14:07.176 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:07.176 "subtype": "Discovery", 00:14:07.176 "listen_addresses": [ 00:14:07.176 { 00:14:07.176 "trtype": "TCP", 00:14:07.176 "adrfam": "IPv4", 00:14:07.176 "traddr": "10.0.0.2", 00:14:07.176 "trsvcid": "4420" 00:14:07.176 } 00:14:07.176 ], 00:14:07.176 "allow_any_host": true, 00:14:07.176 "hosts": [] 00:14:07.176 }, 00:14:07.176 { 00:14:07.176 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:07.176 "subtype": "NVMe", 00:14:07.176 "listen_addresses": [ 00:14:07.176 { 00:14:07.176 "trtype": "TCP", 00:14:07.176 "adrfam": "IPv4", 00:14:07.176 "traddr": "10.0.0.2", 00:14:07.176 "trsvcid": "4420" 00:14:07.176 } 00:14:07.176 ], 00:14:07.176 "allow_any_host": true, 00:14:07.176 "hosts": [], 00:14:07.176 "serial_number": "SPDK00000000000001", 00:14:07.176 "model_number": "SPDK bdev Controller", 00:14:07.176 "max_namespaces": 32, 00:14:07.176 "min_cntlid": 1, 00:14:07.176 "max_cntlid": 65519, 00:14:07.176 "namespaces": [ 00:14:07.176 { 00:14:07.176 "nsid": 1, 00:14:07.176 "bdev_name": "Null1", 00:14:07.176 "name": "Null1", 00:14:07.176 "nguid": "3486F614740B47F0AAAEA56ABE863AAD", 00:14:07.176 "uuid": "3486f614-740b-47f0-aaae-a56abe863aad" 00:14:07.176 } 00:14:07.176 ] 00:14:07.176 }, 00:14:07.176 { 00:14:07.176 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:14:07.176 "subtype": "NVMe", 00:14:07.176 "listen_addresses": [ 00:14:07.176 { 00:14:07.176 "trtype": "TCP", 00:14:07.176 "adrfam": "IPv4", 00:14:07.176 "traddr": "10.0.0.2", 00:14:07.176 "trsvcid": "4420" 00:14:07.176 } 00:14:07.176 ], 00:14:07.176 "allow_any_host": true, 00:14:07.176 "hosts": [], 00:14:07.176 "serial_number": "SPDK00000000000002", 00:14:07.176 "model_number": "SPDK bdev Controller", 00:14:07.176 "max_namespaces": 32, 00:14:07.176 "min_cntlid": 1, 00:14:07.176 "max_cntlid": 65519, 00:14:07.176 "namespaces": [ 00:14:07.176 { 00:14:07.176 "nsid": 1, 00:14:07.176 "bdev_name": "Null2", 00:14:07.176 "name": "Null2", 00:14:07.177 "nguid": "F50AF9DC9FCC418F8E648E57B5CA4668", 00:14:07.177 "uuid": "f50af9dc-9fcc-418f-8e64-8e57b5ca4668" 00:14:07.177 } 00:14:07.177 ] 00:14:07.177 }, 00:14:07.177 { 00:14:07.177 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:14:07.177 "subtype": "NVMe", 00:14:07.177 "listen_addresses": [ 00:14:07.177 { 00:14:07.177 "trtype": "TCP", 00:14:07.177 "adrfam": "IPv4", 00:14:07.177 "traddr": "10.0.0.2", 
00:14:07.177 "trsvcid": "4420" 00:14:07.177 } 00:14:07.177 ], 00:14:07.177 "allow_any_host": true, 00:14:07.177 "hosts": [], 00:14:07.177 "serial_number": "SPDK00000000000003", 00:14:07.177 "model_number": "SPDK bdev Controller", 00:14:07.177 "max_namespaces": 32, 00:14:07.177 "min_cntlid": 1, 00:14:07.177 "max_cntlid": 65519, 00:14:07.177 "namespaces": [ 00:14:07.177 { 00:14:07.177 "nsid": 1, 00:14:07.177 "bdev_name": "Null3", 00:14:07.177 "name": "Null3", 00:14:07.177 "nguid": "6F528A2FC8FC40A5B79609B5DAD67265", 00:14:07.177 "uuid": "6f528a2f-c8fc-40a5-b796-09b5dad67265" 00:14:07.177 } 00:14:07.177 ] 00:14:07.177 }, 00:14:07.177 { 00:14:07.177 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:14:07.177 "subtype": "NVMe", 00:14:07.177 "listen_addresses": [ 00:14:07.177 { 00:14:07.177 "trtype": "TCP", 00:14:07.177 "adrfam": "IPv4", 00:14:07.177 "traddr": "10.0.0.2", 00:14:07.177 "trsvcid": "4420" 00:14:07.177 } 00:14:07.177 ], 00:14:07.177 "allow_any_host": true, 00:14:07.177 "hosts": [], 00:14:07.177 "serial_number": "SPDK00000000000004", 00:14:07.177 "model_number": "SPDK bdev Controller", 00:14:07.177 "max_namespaces": 32, 00:14:07.177 "min_cntlid": 1, 00:14:07.177 "max_cntlid": 65519, 00:14:07.177 "namespaces": [ 00:14:07.177 { 00:14:07.177 "nsid": 1, 00:14:07.177 "bdev_name": "Null4", 00:14:07.177 "name": "Null4", 00:14:07.177 "nguid": "4263CB1F113340FD97B86C175CD084FE", 00:14:07.177 "uuid": "4263cb1f-1133-40fd-97b8-6c175cd084fe" 00:14:07.177 } 00:14:07.177 ] 00:14:07.177 } 00:14:07.177 ] 00:14:07.177 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.177 06:24:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:14:07.177 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:07.177 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:07.177 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.177 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:14:07.435 06:24:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:07.435 rmmod nvme_tcp 00:14:07.435 rmmod nvme_fabrics 00:14:07.435 rmmod nvme_keyring 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 2037414 ']' 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 2037414 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # '[' -z 2037414 ']' 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # kill -0 2037414 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # uname 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2037414 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2037414' 00:14:07.435 killing process with pid 2037414 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@971 -- # kill 2037414 00:14:07.435 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@976 -- # wait 2037414 00:14:07.694 06:24:39 
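Teardown mirrors setup: the subsystems and their null bdevs are deleted over RPC, the referral is removed, the target process is stopped and the initiator-side kernel modules are unloaded. A condensed sketch of the same steps (nvmfpid is the PID saved when nvmf_tgt was launched in the earlier sketch):

  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  for i in 1 2 3 4; do
      rpc nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      rpc bdev_null_delete "Null$i"
  done
  rpc nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  rpc bdev_get_bdevs | jq -r '.[].name'   # should print nothing once every bdev is gone

  kill "$nvmfpid" && wait "$nvmfpid"      # stop the nvmf_tgt started earlier
  modprobe -r nvme-tcp nvme-fabrics       # optional: unload the initiator-side modules
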
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:07.694 06:24:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:10.234 00:14:10.234 real 0m5.603s 00:14:10.234 user 0m4.666s 00:14:10.234 sys 0m1.948s 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:14:10.234 ************************************ 00:14:10.234 END TEST nvmf_target_discovery 00:14:10.234 ************************************ 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:10.234 ************************************ 00:14:10.234 START TEST nvmf_referrals 00:14:10.234 ************************************ 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:14:10.234 * Looking for test storage... 
00:14:10.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lcov --version 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.234 --rc genhtml_branch_coverage=1 00:14:10.234 --rc genhtml_function_coverage=1 00:14:10.234 --rc genhtml_legend=1 00:14:10.234 --rc geninfo_all_blocks=1 00:14:10.234 --rc geninfo_unexecuted_blocks=1 00:14:10.234 00:14:10.234 ' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.234 --rc genhtml_branch_coverage=1 00:14:10.234 --rc genhtml_function_coverage=1 00:14:10.234 --rc genhtml_legend=1 00:14:10.234 --rc geninfo_all_blocks=1 00:14:10.234 --rc geninfo_unexecuted_blocks=1 00:14:10.234 00:14:10.234 ' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.234 --rc genhtml_branch_coverage=1 00:14:10.234 --rc genhtml_function_coverage=1 00:14:10.234 --rc genhtml_legend=1 00:14:10.234 --rc geninfo_all_blocks=1 00:14:10.234 --rc geninfo_unexecuted_blocks=1 00:14:10.234 00:14:10.234 ' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:10.234 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:10.234 --rc genhtml_branch_coverage=1 00:14:10.234 --rc genhtml_function_coverage=1 00:14:10.234 --rc genhtml_legend=1 00:14:10.234 --rc geninfo_all_blocks=1 00:14:10.234 --rc geninfo_unexecuted_blocks=1 00:14:10.234 00:14:10.234 ' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- 
# uname -s 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:10.234 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:10.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 
00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:14:10.235 06:24:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # e810=() 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:14:12.139 06:24:43 
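The referrals test that is starting here reuses the same target plumbing and works against the three loopback referral addresses defined above (127.0.0.2-127.0.0.4, port 4430). The log has not yet reached the test body, so the following is only an illustration built from those constants and the referral RPCs already used by the discovery test, not a transcript of what referrals.sh actually does:

  rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }

  for ref_ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_add_referral -t tcp -a "$ref_ip" -s 4430
  done

  # referral entries show up as extra "discovery subsystem referral" records
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420

  for ref_ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      rpc nvmf_discovery_remove_referral -t tcp -a "$ref_ip" -s 4430
  done
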
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:12.139 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:12.140 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:12.140 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:12.140 
06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:12.140 Found net devices under 0000:09:00.0: cvl_0_0 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:12.140 Found net devices under 0000:09:00.1: cvl_0_1 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:12.140 06:24:43 
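Note on the device scan above: nvmf/common.sh resolves each matched PCI function to its kernel net device by globbing /sys/bus/pci/devices/$pci/net/. A minimal standalone sketch of that mapping, using the two E810 ports reported above (0000:09:00.0 and 0000:09:00.1); another system would substitute its own addresses:

  #!/usr/bin/env bash
  # Map each PCI function to the net device(s) bound to it, the same idea as the
  # pci_net_devs glob in the trace. Prints nothing for a function with no netdev.
  for pci in 0000:09:00.0 0000:09:00.1; do
      for dev in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$dev" ] || continue        # glob did not match: no bound netdev
          printf '%s -> %s\n' "$pci" "$(basename "$dev")"
      done
  done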
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:12.140 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:12.398 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:12.399 06:24:43 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:12.399 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:12.399 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:14:12.399 00:14:12.399 --- 10.0.0.2 ping statistics --- 00:14:12.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.399 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:12.399 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:12.399 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:14:12.399 00:14:12.399 --- 10.0.0.1 ping statistics --- 00:14:12.399 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:12.399 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=2039516 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 2039516 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # '[' -z 2039516 ']' 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:12.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
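The nvmf_tcp_init trace above builds the test topology: one E810 port (cvl_0_0) is moved into a private network namespace as the target side, the other (cvl_0_1) stays in the root namespace as the initiator side, TCP/4420 is opened in the firewall, and both directions are ping-checked before nvmf_tgt is started inside the namespace. A condensed sketch of that setup using the names and addresses printed above (not a substitute for the real nvmf/common.sh helpers):

  # Target interface lives in its own namespace; initiator interface stays in the root ns.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Allow NVMe/TCP traffic in; the comment tag lets teardown strip exactly this rule.
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  # Verify reachability both ways before starting the target.
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1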
00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:12.399 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.399 [2024-11-20 06:24:44.130101] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:14:12.399 [2024-11-20 06:24:44.130203] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:12.399 [2024-11-20 06:24:44.199962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:12.657 [2024-11-20 06:24:44.257022] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:12.657 [2024-11-20 06:24:44.257071] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:12.657 [2024-11-20 06:24:44.257098] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:12.657 [2024-11-20 06:24:44.257109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:12.657 [2024-11-20 06:24:44.257118] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:12.657 [2024-11-20 06:24:44.258756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:12.657 [2024-11-20 06:24:44.258822] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:12.657 [2024-11-20 06:24:44.258889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:12.657 [2024-11-20 06:24:44.258892] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@866 -- # return 0 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.657 [2024-11-20 06:24:44.413167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 
00:14:12.657 [2024-11-20 06:24:44.442520] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.657 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.915 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:14:13.173 06:24:44 
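rpc_cmd in the trace above is the test-harness wrapper that, as far as this trace shows, forwards its arguments to SPDK's scripts/rpc.py against the target's /var/tmp/spdk.sock. The referral handling just exercised then reduces to a handful of direct RPC calls; a sketch with the same arguments as the trace, run from an SPDK checkout:

  # Transport and discovery listener (created once, earlier in this test).
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery

  # Add three referrals pointing at other discovery services, list them, then remove them.
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  ./scripts/rpc.py nvmf_discovery_get_referrals | jq -r '.[].address.traddr' | sort
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      ./scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done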
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:13.173 06:24:44 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == 
\r\p\c ]] 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:13.431 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:13.689 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:14:13.689 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:14:13.689 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:14:13.689 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:14:13.689 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:13.689 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.947 06:24:45 
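The nvme-side verification above parses the discovery log page as JSON: referral addresses are every record whose subtype is not the "current discovery subsystem", and the two referral kinds are told apart by subtype and subnqn. A trimmed sketch of those checks, reusing the target address and the host NQN/ID used throughout this trace (abbreviated here as shell variables):

  # Fetch the discovery log page from the target's discovery service as JSON.
  out=$(nvme discover --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
                      -t tcp -a 10.0.0.2 -s 8009 -o json)

  # Referral addresses = every record except the discovery subsystem being queried.
  echo "$out" | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

  # Distinguish a subsystem referral from a discovery-service referral by subtype.
  echo "$out" | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
  echo "$out" | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'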
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:13.947 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:14:14.205 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:14:14.205 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:14:14.205 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:14:14.205 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 
'subtype=discovery subsystem referral' 00:14:14.205 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:14.205 06:24:45 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 8009 -o json 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:14:14.464 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 
00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set +e 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:14.722 rmmod nvme_tcp 00:14:14.722 rmmod nvme_fabrics 00:14:14.722 rmmod nvme_keyring 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 2039516 ']' 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 2039516 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # '[' -z 2039516 ']' 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # kill -0 2039516 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # uname 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2039516 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2039516' 00:14:14.722 killing process with pid 2039516 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@971 -- # kill 2039516 00:14:14.722 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@976 -- # wait 2039516 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:14.982 06:24:46 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:14.982 06:24:46 
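nvmftestfini above unwinds the environment: unload the host-side NVMe/TCP modules, kill the nvmf_tgt process, strip the SPDK_NVMF-tagged iptables rules, and remove the namespace (the trace finishes that last step just below). A compact sketch of the equivalent cleanup, assuming the target PID was saved in $nvmfpid when it was started:

  modprobe -v -r nvme-tcp        # also drops nvme-fabrics/nvme-keyring once unused
  modprobe -v -r nvme-fabrics
  kill "$nvmfpid"                # the harness additionally waits for the PID to exit
  # Remove only the firewall rules this test added (tagged with an SPDK_NVMF comment).
  iptables-save | grep -v SPDK_NVMF | iptables-restore
  ip netns delete cvl_0_0_ns_spdk      # physical port cvl_0_0 returns to the root namespace
  ip -4 addr flush cvl_0_1             # drop the initiator-side address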
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:16.887 00:14:16.887 real 0m7.143s 00:14:16.887 user 0m10.993s 00:14:16.887 sys 0m2.404s 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:14:16.887 ************************************ 00:14:16.887 END TEST nvmf_referrals 00:14:16.887 ************************************ 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:16.887 06:24:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:17.147 ************************************ 00:14:17.147 START TEST nvmf_connect_disconnect 00:14:17.147 ************************************ 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:14:17.147 * Looking for test storage... 00:14:17.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:17.147 06:24:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:17.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.147 --rc genhtml_branch_coverage=1 00:14:17.147 --rc genhtml_function_coverage=1 00:14:17.147 --rc genhtml_legend=1 00:14:17.147 --rc geninfo_all_blocks=1 00:14:17.147 --rc geninfo_unexecuted_blocks=1 00:14:17.147 00:14:17.147 ' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:17.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.147 --rc genhtml_branch_coverage=1 00:14:17.147 --rc genhtml_function_coverage=1 00:14:17.147 --rc genhtml_legend=1 00:14:17.147 --rc geninfo_all_blocks=1 00:14:17.147 --rc geninfo_unexecuted_blocks=1 00:14:17.147 00:14:17.147 ' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:17.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.147 --rc genhtml_branch_coverage=1 00:14:17.147 --rc genhtml_function_coverage=1 00:14:17.147 --rc genhtml_legend=1 00:14:17.147 --rc geninfo_all_blocks=1 00:14:17.147 --rc geninfo_unexecuted_blocks=1 00:14:17.147 00:14:17.147 ' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:17.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:17.147 --rc genhtml_branch_coverage=1 00:14:17.147 --rc genhtml_function_coverage=1 00:14:17.147 --rc genhtml_legend=1 00:14:17.147 --rc geninfo_all_blocks=1 00:14:17.147 --rc geninfo_unexecuted_blocks=1 00:14:17.147 00:14:17.147 ' 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:17.147 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 
-- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:17.148 06:24:48 
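The common.sh variables sourced above give every initiator-side command a stable host identity: NVME_HOSTNQN comes from `nvme gen-hostnqn`, NVME_HOSTID carries the same UUID, and both are passed to each `nvme discover` via the NVME_HOST array. A minimal sketch of that convention (the UUID-extraction line is an assumption about how the ID is derived; the trace only shows that the two values share the same UUID):

  NVME_HOSTNQN=$(nvme gen-hostnqn)        # e.g. nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*:}         # assumed: hostid is the trailing UUID of the NQN
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")

  # Used the same way as in the trace, e.g. for a discovery query:
  nvme discover "${NVME_HOST[@]}" -t tcp -a 10.0.0.2 -s 8009 -o json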
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:17.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:14:17.148 06:24:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:19.716 
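The "[: : integer expression expected" message above (nvmf/common.sh line 33) is bash reporting that a numeric test was handed an empty string, i.e. the build_nvmf_app_args check ran as '[' '' -eq 1 ']'. The script simply continues past the failed comparison, but the usual defensive pattern is to default the value before comparing. A hedged sketch with a hypothetical flag name (the actual variable on that line is not shown in the trace):

  # Defaulting avoids "[: : integer expression expected" when the flag is unset or empty.
  some_flag=${SOME_FLAG:-0}        # hypothetical variable; substitute the real one
  if [ "$some_flag" -eq 1 ]; then
      echo "flag enabled"
  fi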
06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:19.716 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:19.716 
06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:19.716 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:19.716 Found net devices under 0000:09:00.0: cvl_0_0 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 
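The trace above is nvmf/common.sh resolving the supported NIC PCI IDs (here Intel E810, 0x8086:0x159b) to their kernel net interfaces through sysfs, which is where the cvl_0_0/cvl_0_1 names reported below come from. The snippet that follows is only an illustrative sketch of that lookup under assumed variable names, not the actual gather_supported_nvmf_pci_devs helper:

    # Illustrative sketch (assumed names, not the real helper): resolve Intel E810
    # ports (vendor 0x8086, device 0x159b) to their net interfaces via sysfs.
    intel=0x8086
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == "$intel" && $(cat "$pci/device") == 0x159b ]] || continue
        [[ -d $pci/net ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] || continue
            echo "Found ${pci##*/} -> ${net##*/}"
        done
    done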
00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:19.716 Found net devices under 0000:09:00.1: cvl_0_1 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:19.716 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:19.717 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:19.717 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:19.717 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:19.717 06:24:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:19.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:19.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:14:19.717 00:14:19.717 --- 10.0.0.2 ping statistics --- 00:14:19.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.717 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:19.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:19.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:14:19.717 00:14:19.717 --- 10.0.0.1 ping statistics --- 00:14:19.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:19.717 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # nvmfpid=2041819 00:14:19.717 06:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 2041819 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # '[' -z 2041819 ']' 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:19.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.717 [2024-11-20 06:24:51.189771] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:14:19.717 [2024-11-20 06:24:51.189861] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:19.717 [2024-11-20 06:24:51.264828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:19.717 [2024-11-20 06:24:51.325817] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:19.717 [2024-11-20 06:24:51.325870] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:19.717 [2024-11-20 06:24:51.325899] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:19.717 [2024-11-20 06:24:51.325911] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:19.717 [2024-11-20 06:24:51.325920] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
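At this point nvmfappstart has launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten is blocking until the RPC socket answers, which is why the DPDK/EAL and reactor notices are interleaved with the test's own output. A condensed stand-in for that start-and-wait pattern is sketched below; the real helpers live in nvmf/common.sh and common/autotest_common.sh, and the rpc_get_methods probe is this sketch's assumption rather than the exact check waitforlisten performs:

    # Condensed stand-in for nvmfappstart + waitforlisten (simplified sketch).
    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip netns exec "$NVMF_TARGET_NAMESPACE" \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to accept commands.
    until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
        kill -0 "$nvmfpid" 2>/dev/null || exit 1   # bail out if the target died during startup
        sleep 0.5
    done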
00:14:19.717 [2024-11-20 06:24:51.327541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:19.717 [2024-11-20 06:24:51.327572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:19.717 [2024-11-20 06:24:51.327630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:19.717 [2024-11-20 06:24:51.327633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@866 -- # return 0 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.717 [2024-11-20 06:24:51.490383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.717 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.974 06:24:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:19.974 [2024-11-20 06:24:51.561158] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:19.974 06:24:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:22.502 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.783 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:28.307 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:30.835 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.118 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:34.118 rmmod nvme_tcp 00:14:34.118 rmmod nvme_fabrics 00:14:34.118 rmmod nvme_keyring 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 2041819 ']' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 2041819 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2041819 ']' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # kill -0 2041819 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # uname 
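With the listener up on 10.0.0.2:4420, the provisioning above reduces to five RPCs followed by the connect/disconnect loop (num_iterations=5). The sketch below restates it as plain rpc.py calls; the RPC names and arguments are copied from the trace, while the nvme-cli loop is an assumption about what target/connect_disconnect.sh drives, since the trace only shows the resulting 'disconnected 1 controller(s)' messages:

    # Provisioning sequence as plain rpc.py calls (arguments copied from the trace above).
    rpc="./scripts/rpc.py -s /var/tmp/spdk.sock"
    $rpc nvmf_create_transport -t tcp -o -u 8192 -c 0
    $rpc bdev_malloc_create 64 512                                   # returns Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # Assumed shape of the 5 connect/disconnect iterations run from the initiator side.
    for i in 1 2 3 4 5; do
        nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    done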
00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2041819 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2041819' 00:14:34.118 killing process with pid 2041819 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@971 -- # kill 2041819 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@976 -- # wait 2041819 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:34.118 06:25:05 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.024 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:36.024 00:14:36.024 real 0m19.097s 00:14:36.024 user 0m57.536s 00:14:36.024 sys 0m3.334s 00:14:36.024 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:36.024 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:36.024 ************************************ 00:14:36.024 END TEST nvmf_connect_disconnect 00:14:36.024 ************************************ 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:36.283 06:25:07 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:36.283 ************************************ 00:14:36.283 START TEST nvmf_multitarget 00:14:36.283 ************************************ 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:36.283 * Looking for test storage... 00:14:36.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lcov --version 00:14:36.283 06:25:07 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:36.283 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # : 1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:36.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.284 --rc genhtml_branch_coverage=1 00:14:36.284 --rc genhtml_function_coverage=1 00:14:36.284 --rc genhtml_legend=1 00:14:36.284 --rc geninfo_all_blocks=1 00:14:36.284 --rc geninfo_unexecuted_blocks=1 00:14:36.284 00:14:36.284 ' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:36.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.284 --rc genhtml_branch_coverage=1 00:14:36.284 --rc genhtml_function_coverage=1 00:14:36.284 --rc genhtml_legend=1 00:14:36.284 --rc geninfo_all_blocks=1 00:14:36.284 --rc geninfo_unexecuted_blocks=1 00:14:36.284 00:14:36.284 ' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:36.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.284 --rc genhtml_branch_coverage=1 00:14:36.284 --rc genhtml_function_coverage=1 00:14:36.284 --rc genhtml_legend=1 00:14:36.284 --rc geninfo_all_blocks=1 00:14:36.284 --rc geninfo_unexecuted_blocks=1 00:14:36.284 00:14:36.284 ' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:36.284 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:36.284 --rc genhtml_branch_coverage=1 00:14:36.284 --rc genhtml_function_coverage=1 00:14:36.284 --rc genhtml_legend=1 00:14:36.284 --rc geninfo_all_blocks=1 00:14:36.284 --rc geninfo_unexecuted_blocks=1 00:14:36.284 00:14:36.284 ' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:36.284 06:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:36.284 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:36.284 06:25:08 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:14:36.284 06:25:08 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:14:38.820 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
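The nvmf_multitarget suite repeats the same nvmftestinit and NIC discovery as the previous test because each suite is a separate script invoked through the harness's run_test wrapper and re-sources nvmf/common.sh itself. A much-simplified stand-in for that wrapper is sketched below; the real run_test in common/autotest_common.sh also handles xtrace and argument checks, so treat this as an assumed shape rather than its actual code:

    # Much-simplified stand-in for the run_test wrapper visible in the trace.
    run_test() {
        local test_name=$1; shift
        echo "START TEST $test_name"
        time "$@"                 # produces the real/user/sys summary seen in the log
        echo "END TEST $test_name"
    }
    run_test nvmf_multitarget ./test/nvmf/target/multitarget.sh --transport=tcp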
00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:38.821 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:38.821 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:38.821 Found net devices under 0000:09:00.0: cvl_0_0 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:38.821 Found net devices under 0000:09:00.1: cvl_0_1 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget 
-- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:38.821 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:38.821 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:38.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:14:38.822 00:14:38.822 --- 10.0.0.2 ping statistics --- 00:14:38.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.822 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:38.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:38.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:14:38.822 00:14:38.822 --- 10.0.0.1 ping statistics --- 00:14:38.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:38.822 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=2045586 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # waitforlisten 2045586 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # '[' -z 2045586 ']' 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:38.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:38.822 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:38.822 [2024-11-20 06:25:10.435047] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:14:38.822 [2024-11-20 06:25:10.435157] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:38.822 [2024-11-20 06:25:10.510861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:38.822 [2024-11-20 06:25:10.569157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:38.822 [2024-11-20 06:25:10.569206] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:38.822 [2024-11-20 06:25:10.569234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:38.822 [2024-11-20 06:25:10.569251] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:38.822 [2024-11-20 06:25:10.569261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:38.822 [2024-11-20 06:25:10.570849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:38.822 [2024-11-20 06:25:10.570908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:38.822 [2024-11-20 06:25:10.570946] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:38.822 [2024-11-20 06:25:10.570949] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@866 -- # return 0 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:39.080 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:14:39.366 "nvmf_tgt_1" 00:14:39.366 06:25:10 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:39.366 "nvmf_tgt_2" 00:14:39.366 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 
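The multitarget test body that the trace is entering here amounts to counting targets before and after creating and deleting two named targets with multitarget_rpc.py. A simplified sketch of that shape, assumed from the commands and '[ N != N ]' checks visible in the trace that continues below, is:

    # Assumed, simplified shape of the target-count checks in target/multitarget.sh.
    rpc_py=./test/nvmf/target/multitarget_rpc.py
    count=$($rpc_py nvmf_get_targets | jq length)
    [ "$count" != 1 ] && exit 1            # only the default target should exist
    $rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32
    $rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
    count=$($rpc_py nvmf_get_targets | jq length)
    [ "$count" != 3 ] && exit 1            # default target plus the two new ones
    $rpc_py nvmf_delete_target -n nvmf_tgt_1
    $rpc_py nvmf_delete_target -n nvmf_tgt_2
    count=$($rpc_py nvmf_get_targets | jq length)
    [ "$count" != 1 ] && exit 1            # back to just the default target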
00:14:39.366 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:39.652 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:39.652 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:39.652 true 00:14:39.652 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:39.652 true 00:14:39.652 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:39.652 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:14:39.910 rmmod nvme_tcp 00:14:39.910 rmmod nvme_fabrics 00:14:39.910 rmmod nvme_keyring 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 2045586 ']' 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 2045586 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # '[' -z 2045586 ']' 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # kill -0 2045586 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # uname 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2045586 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:14:39.910 06:25:11 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2045586' 00:14:39.910 killing process with pid 2045586 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@971 -- # kill 2045586 00:14:39.910 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@976 -- # wait 2045586 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:40.170 06:25:11 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.710 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:14:42.710 00:14:42.710 real 0m6.065s 00:14:42.710 user 0m7.060s 00:14:42.710 sys 0m2.092s 00:14:42.710 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1128 -- # xtrace_disable 00:14:42.711 06:25:13 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:42.711 ************************************ 00:14:42.711 END TEST nvmf_multitarget 00:14:42.711 ************************************ 00:14:42.711 06:25:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:42.711 06:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:14:42.711 06:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:14:42.711 06:25:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:42.711 ************************************ 00:14:42.711 START TEST nvmf_rpc 00:14:42.711 ************************************ 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:42.711 * Looking for test storage... 
00:14:42.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lcov --version 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:14:42.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.711 --rc genhtml_branch_coverage=1 00:14:42.711 --rc genhtml_function_coverage=1 00:14:42.711 --rc genhtml_legend=1 00:14:42.711 --rc geninfo_all_blocks=1 00:14:42.711 --rc geninfo_unexecuted_blocks=1 00:14:42.711 00:14:42.711 ' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:14:42.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.711 --rc genhtml_branch_coverage=1 00:14:42.711 --rc genhtml_function_coverage=1 00:14:42.711 --rc genhtml_legend=1 00:14:42.711 --rc geninfo_all_blocks=1 00:14:42.711 --rc geninfo_unexecuted_blocks=1 00:14:42.711 00:14:42.711 ' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:14:42.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.711 --rc genhtml_branch_coverage=1 00:14:42.711 --rc genhtml_function_coverage=1 00:14:42.711 --rc genhtml_legend=1 00:14:42.711 --rc geninfo_all_blocks=1 00:14:42.711 --rc geninfo_unexecuted_blocks=1 00:14:42.711 00:14:42.711 ' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:14:42.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:42.711 --rc genhtml_branch_coverage=1 00:14:42.711 --rc genhtml_function_coverage=1 00:14:42.711 --rc genhtml_legend=1 00:14:42.711 --rc geninfo_all_blocks=1 00:14:42.711 --rc geninfo_unexecuted_blocks=1 00:14:42.711 00:14:42.711 ' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
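The common.sh sourcing that follows derives the host identity reused by every nvme connect later in this test (and which the target's allow-host checks key on). A rough sketch of that derivation, assuming the host ID is simply the UUID portion of the generated NQN, which matches the values seen in this run; the exact logic in common.sh may differ:

  # Generate a host NQN with nvme-cli and reuse its UUID suffix as the host ID.
  NVME_HOSTNQN=$(nvme gen-hostnqn)      # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-...
  NVME_HOSTID=${NVME_HOSTNQN##*:}       # assumed: strip everything up to the last ':'
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  # Later connect attempts then take the form:
  # nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420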
00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:14:42.711 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:14:42.712 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:14:42.712 06:25:14 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:14:42.712 06:25:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:14:44.617 Found 0000:09:00.0 (0x8086 - 0x159b) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:14:44.617 Found 0000:09:00.1 (0x8086 - 0x159b) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:14:44.617 Found net devices under 0000:09:00.0: cvl_0_0 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:14:44.617 Found net devices under 0000:09:00.1: cvl_0_1 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.617 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.618 06:25:16 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:14:44.618 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:44.618 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:14:44.618 00:14:44.618 --- 10.0.0.2 ping statistics --- 00:14:44.618 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.618 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:14:44.618 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:44.877 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:44.877 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:14:44.877 00:14:44.877 --- 10.0.0.1 ping statistics --- 00:14:44.877 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.877 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=2047697 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 2047697 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # '[' -z 2047697 ']' 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # local max_retries=100 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # xtrace_disable 00:14:44.877 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.877 [2024-11-20 06:25:16.536785] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
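The rpc.sh checks that follow boil down to pulling nvmf_get_stats and asserting on the poll-group layout before and after the TCP transport is created. A condensed sketch of those assertions, assuming the default RPC socket (the in-tree jcount/jsum helpers wrap equivalent jq/awk pipelines):

  # Four poll groups for core mask 0xF; no transports or qpairs yet.
  stats=$(./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats)
  jq '.poll_groups[].name' <<< "$stats" | wc -l                              # expect 4
  jq '.poll_groups[].io_qpairs' <<< "$stats" | awk '{s+=$1} END {print s}'   # expect 0
  # Create the transport with the same options used in this run, then recheck.
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_get_stats | jq '.poll_groups[0].transports[0].trtype'   # "TCP"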
00:14:44.877 [2024-11-20 06:25:16.536878] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:44.877 [2024-11-20 06:25:16.607173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:44.877 [2024-11-20 06:25:16.663813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:44.877 [2024-11-20 06:25:16.663878] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:44.877 [2024-11-20 06:25:16.663891] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:44.877 [2024-11-20 06:25:16.663902] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:44.877 [2024-11-20 06:25:16.663911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:44.877 [2024-11-20 06:25:16.665595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:44.877 [2024-11-20 06:25:16.665687] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:44.877 [2024-11-20 06:25:16.665753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:44.877 [2024-11-20 06:25:16.665757] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@866 -- # return 0 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:45.136 "tick_rate": 2700000000, 00:14:45.136 "poll_groups": [ 00:14:45.136 { 00:14:45.136 "name": "nvmf_tgt_poll_group_000", 00:14:45.136 "admin_qpairs": 0, 00:14:45.136 "io_qpairs": 0, 00:14:45.136 "current_admin_qpairs": 0, 00:14:45.136 "current_io_qpairs": 0, 00:14:45.136 "pending_bdev_io": 0, 00:14:45.136 "completed_nvme_io": 0, 00:14:45.136 "transports": [] 00:14:45.136 }, 00:14:45.136 { 00:14:45.136 "name": "nvmf_tgt_poll_group_001", 00:14:45.136 "admin_qpairs": 0, 00:14:45.136 "io_qpairs": 0, 00:14:45.136 "current_admin_qpairs": 0, 00:14:45.136 "current_io_qpairs": 0, 00:14:45.136 "pending_bdev_io": 0, 00:14:45.136 "completed_nvme_io": 0, 00:14:45.136 "transports": [] 00:14:45.136 }, 00:14:45.136 { 00:14:45.136 "name": "nvmf_tgt_poll_group_002", 00:14:45.136 "admin_qpairs": 0, 00:14:45.136 "io_qpairs": 0, 00:14:45.136 
"current_admin_qpairs": 0, 00:14:45.136 "current_io_qpairs": 0, 00:14:45.136 "pending_bdev_io": 0, 00:14:45.136 "completed_nvme_io": 0, 00:14:45.136 "transports": [] 00:14:45.136 }, 00:14:45.136 { 00:14:45.136 "name": "nvmf_tgt_poll_group_003", 00:14:45.136 "admin_qpairs": 0, 00:14:45.136 "io_qpairs": 0, 00:14:45.136 "current_admin_qpairs": 0, 00:14:45.136 "current_io_qpairs": 0, 00:14:45.136 "pending_bdev_io": 0, 00:14:45.136 "completed_nvme_io": 0, 00:14:45.136 "transports": [] 00:14:45.136 } 00:14:45.136 ] 00:14:45.136 }' 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.136 [2024-11-20 06:25:16.912635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:45.136 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:45.137 "tick_rate": 2700000000, 00:14:45.137 "poll_groups": [ 00:14:45.137 { 00:14:45.137 "name": "nvmf_tgt_poll_group_000", 00:14:45.137 "admin_qpairs": 0, 00:14:45.137 "io_qpairs": 0, 00:14:45.137 "current_admin_qpairs": 0, 00:14:45.137 "current_io_qpairs": 0, 00:14:45.137 "pending_bdev_io": 0, 00:14:45.137 "completed_nvme_io": 0, 00:14:45.137 "transports": [ 00:14:45.137 { 00:14:45.137 "trtype": "TCP" 00:14:45.137 } 00:14:45.137 ] 00:14:45.137 }, 00:14:45.137 { 00:14:45.137 "name": "nvmf_tgt_poll_group_001", 00:14:45.137 "admin_qpairs": 0, 00:14:45.137 "io_qpairs": 0, 00:14:45.137 "current_admin_qpairs": 0, 00:14:45.137 "current_io_qpairs": 0, 00:14:45.137 "pending_bdev_io": 0, 00:14:45.137 "completed_nvme_io": 0, 00:14:45.137 "transports": [ 00:14:45.137 { 00:14:45.137 "trtype": "TCP" 00:14:45.137 } 00:14:45.137 ] 00:14:45.137 }, 00:14:45.137 { 00:14:45.137 "name": "nvmf_tgt_poll_group_002", 00:14:45.137 "admin_qpairs": 0, 00:14:45.137 "io_qpairs": 0, 00:14:45.137 "current_admin_qpairs": 0, 00:14:45.137 "current_io_qpairs": 0, 00:14:45.137 "pending_bdev_io": 0, 00:14:45.137 "completed_nvme_io": 0, 00:14:45.137 "transports": [ 00:14:45.137 { 00:14:45.137 "trtype": "TCP" 
00:14:45.137 } 00:14:45.137 ] 00:14:45.137 }, 00:14:45.137 { 00:14:45.137 "name": "nvmf_tgt_poll_group_003", 00:14:45.137 "admin_qpairs": 0, 00:14:45.137 "io_qpairs": 0, 00:14:45.137 "current_admin_qpairs": 0, 00:14:45.137 "current_io_qpairs": 0, 00:14:45.137 "pending_bdev_io": 0, 00:14:45.137 "completed_nvme_io": 0, 00:14:45.137 "transports": [ 00:14:45.137 { 00:14:45.137 "trtype": "TCP" 00:14:45.137 } 00:14:45.137 ] 00:14:45.137 } 00:14:45.137 ] 00:14:45.137 }' 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:45.137 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:45.396 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:45.396 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:45.396 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:45.396 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:45.396 06:25:16 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.396 Malloc1 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.396 [2024-11-20 06:25:17.076153] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.2 -s 4420 00:14:45.396 [2024-11-20 06:25:17.098686] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:45.396 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:45.396 could not add new controller: failed to write to nvme-fabrics device 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:45.396 06:25:17 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.396 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:45.397 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.397 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:46.331 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:46.331 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:46.331 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:46.331 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:46.331 06:25:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:48.231 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:14:48.231 06:25:19 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.231 [2024-11-20 06:25:19.988116] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a' 00:14:48.231 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:48.231 could not add new controller: failed to write to nvme-fabrics device 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:48.231 
06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.231 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:48.798 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:48.798 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:48.798 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:48.798 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:48.798 06:25:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.328 
06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.328 [2024-11-20 06:25:22.755604] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:51.328 06:25:22 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:51.587 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:51.587 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:51.587 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:51.587 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:51.587 06:25:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.119 [2024-11-20 06:25:25.549603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.119 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.120 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.120 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:54.120 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.120 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.120 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.120 06:25:25 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:54.378 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:54.378 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:54.378 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:54.378 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:54.378 06:25:26 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:56.906 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:56.906 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:56.906 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:56.906 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:56.907 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.907 [2024-11-20 06:25:28.329696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:56.907 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:57.165 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:57.165 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:14:57.165 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.165 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:14:57.165 06:25:28 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:14:59.694 
06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:14:59.694 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:14:59.694 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:14:59.694 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:14:59.694 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:14:59.694 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:14:59.694 06:25:30 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:59.694 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.694 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 
00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 [2024-11-20 06:25:31.120422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.695 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:00.260 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:00.260 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:00.260 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:00.260 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:00.260 06:25:31 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:02.160 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:02.160 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:02.160 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:02.160 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:02.161 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 
00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.161 [2024-11-20 06:25:33.977097] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.161 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:02.419 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.419 06:25:33 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.984 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.984 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # local i=0 00:15:02.984 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.984 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:02.984 06:25:34 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # sleep 2 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # return 0 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1221 -- # local i=0 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1233 -- # return 0 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:04.886 
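Each of the five connect/disconnect iterations above (the target/rpc.sh@81 loop) repeats the same pattern: create the subsystem with serial SPDKISFASTANDAWESOME, add a TCP listener on 10.0.0.2:4420, attach bdev Malloc1 as namespace 5, allow any host, connect with nvme-cli, wait for the serial, then disconnect and tear the subsystem back down. The target/rpc.sh@99 loop that starts next repeats the subsystem setup without a host connection, exercising only namespace add/remove. Condensed into a standalone sketch under the same assumptions as before (running target, scripts/rpc.py, nvme-cli, the waitforserial stand-ins sketched earlier):

  SUBNQN=nqn.2016-06.io.spdk:cnode1
  HOSTNQN=$(nvme gen-hostnqn)
  for i in $(seq 1 5); do
      ./scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
      ./scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
      ./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5
      ./scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"
      nvme connect -t tcp -n "$SUBNQN" -a 10.0.0.2 -s 4420 --hostnqn="$HOSTNQN"
      waitforserial SPDKISFASTANDAWESOME
      nvme disconnect -n "$SUBNQN"
      waitforserial_disconnect SPDKISFASTANDAWESOME
      ./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
      ./scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"
  done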
06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.886 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 [2024-11-20 06:25:36.721601] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 [2024-11-20 06:25:36.769661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 
06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 [2024-11-20 06:25:36.817815] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 [2024-11-20 06:25:36.865932] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.145 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 [2024-11-20 06:25:36.914093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:05.146 "tick_rate": 2700000000, 00:15:05.146 "poll_groups": [ 00:15:05.146 { 00:15:05.146 "name": "nvmf_tgt_poll_group_000", 00:15:05.146 "admin_qpairs": 2, 00:15:05.146 "io_qpairs": 84, 00:15:05.146 "current_admin_qpairs": 0, 00:15:05.146 "current_io_qpairs": 0, 00:15:05.146 "pending_bdev_io": 0, 00:15:05.146 "completed_nvme_io": 276, 00:15:05.146 "transports": [ 00:15:05.146 { 00:15:05.146 "trtype": "TCP" 00:15:05.146 } 00:15:05.146 ] 00:15:05.146 }, 00:15:05.146 { 00:15:05.146 "name": "nvmf_tgt_poll_group_001", 00:15:05.146 "admin_qpairs": 2, 00:15:05.146 "io_qpairs": 84, 00:15:05.146 "current_admin_qpairs": 0, 00:15:05.146 "current_io_qpairs": 0, 00:15:05.146 "pending_bdev_io": 0, 00:15:05.146 "completed_nvme_io": 135, 00:15:05.146 "transports": [ 00:15:05.146 { 00:15:05.146 "trtype": "TCP" 00:15:05.146 } 00:15:05.146 ] 00:15:05.146 }, 00:15:05.146 { 00:15:05.146 "name": "nvmf_tgt_poll_group_002", 00:15:05.146 "admin_qpairs": 1, 00:15:05.146 "io_qpairs": 84, 00:15:05.146 "current_admin_qpairs": 0, 00:15:05.146 "current_io_qpairs": 0, 00:15:05.146 "pending_bdev_io": 0, 00:15:05.146 "completed_nvme_io": 139, 00:15:05.146 "transports": [ 00:15:05.146 { 00:15:05.146 "trtype": "TCP" 00:15:05.146 } 00:15:05.146 ] 00:15:05.146 }, 00:15:05.146 { 00:15:05.146 "name": "nvmf_tgt_poll_group_003", 00:15:05.146 "admin_qpairs": 2, 00:15:05.146 "io_qpairs": 84, 00:15:05.146 "current_admin_qpairs": 0, 00:15:05.146 "current_io_qpairs": 0, 00:15:05.146 "pending_bdev_io": 0, 00:15:05.146 "completed_nvme_io": 136, 00:15:05.146 "transports": [ 00:15:05.146 { 00:15:05.146 "trtype": "TCP" 00:15:05.146 } 00:15:05.146 ] 00:15:05.146 } 00:15:05.146 ] 00:15:05.146 }' 00:15:05.146 06:25:36 
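The jsum helper invoked next is just jq piped into awk: it extracts one counter per poll group from that nvmf_get_stats JSON and sums it, which is how the test arrives at 7 admin qpairs and 336 I/O qpairs across the four poll groups above. A standalone equivalent, assuming scripts/rpc.py and jq are available:

  # Sum a per-poll-group counter across all poll groups (mirrors target/rpc.sh's jsum).
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'
  ./scripts/rpc.py nvmf_get_stats | jq '.poll_groups[].io_qpairs'    | awk '{s+=$1} END {print s}'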
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:05.146 06:25:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:05.405 rmmod nvme_tcp 00:15:05.405 rmmod nvme_fabrics 00:15:05.405 rmmod nvme_keyring 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 2047697 ']' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 2047697 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # '[' -z 2047697 ']' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # kill -0 2047697 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # uname 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2047697 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 
2047697' 00:15:05.405 killing process with pid 2047697 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@971 -- # kill 2047697 00:15:05.405 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@976 -- # wait 2047697 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:05.664 06:25:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:08.202 00:15:08.202 real 0m25.470s 00:15:08.202 user 1m22.364s 00:15:08.202 sys 0m4.380s 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:08.202 ************************************ 00:15:08.202 END TEST nvmf_rpc 00:15:08.202 ************************************ 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:08.202 ************************************ 00:15:08.202 START TEST nvmf_invalid 00:15:08.202 ************************************ 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:08.202 * Looking for test storage... 
00:15:08.202 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lcov --version 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:08.202 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.203 --rc genhtml_branch_coverage=1 00:15:08.203 --rc genhtml_function_coverage=1 00:15:08.203 --rc genhtml_legend=1 00:15:08.203 --rc geninfo_all_blocks=1 00:15:08.203 --rc geninfo_unexecuted_blocks=1 00:15:08.203 00:15:08.203 ' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.203 --rc genhtml_branch_coverage=1 00:15:08.203 --rc genhtml_function_coverage=1 00:15:08.203 --rc genhtml_legend=1 00:15:08.203 --rc geninfo_all_blocks=1 00:15:08.203 --rc geninfo_unexecuted_blocks=1 00:15:08.203 00:15:08.203 ' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.203 --rc genhtml_branch_coverage=1 00:15:08.203 --rc genhtml_function_coverage=1 00:15:08.203 --rc genhtml_legend=1 00:15:08.203 --rc geninfo_all_blocks=1 00:15:08.203 --rc geninfo_unexecuted_blocks=1 00:15:08.203 00:15:08.203 ' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:08.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:08.203 --rc genhtml_branch_coverage=1 00:15:08.203 --rc genhtml_function_coverage=1 00:15:08.203 --rc genhtml_legend=1 00:15:08.203 --rc geninfo_all_blocks=1 00:15:08.203 --rc geninfo_unexecuted_blocks=1 00:15:08.203 00:15:08.203 ' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:08.203 06:25:39 
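The scripts/common.sh trace just above is the coverage gate for the next test: the installed lcov reports version 1.15, cmp_versions confirms it sorts before 2, and the --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 options are exported for it. A rough, simplified stand-in for that element-wise comparison (a hypothetical version_lt, not the real cmp_versions):

  version_lt() {               # return 0 if dotted version $1 sorts before $2
      local IFS=.-:
      local -a a b
      read -ra a <<< "$1"
      read -ra b <<< "$2"
      local i max=${#a[@]}
      (( ${#b[@]} > max )) && max=${#b[@]}
      for (( i = 0; i < max; i++ )); do
          (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
          (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
      done
      return 1                 # equal versions are not "less than"
  }

  version_lt 1.15 2 && echo "lcov older than 2.0"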
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:08.203 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:08.203 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:08.204 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:08.204 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:08.204 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:08.204 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:08.204 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:15:08.204 06:25:39 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:10.109 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:10.110 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:10.110 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == 
unbound ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:10.110 Found net devices under 0000:09:00.0: cvl_0_0 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:10.110 Found net devices under 0000:09:00.1: cvl_0_1 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:10.110 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:10.369 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:10.369 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.316 ms 00:15:10.369 00:15:10.369 --- 10.0.0.2 ping statistics --- 00:15:10.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.369 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:10.369 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:10.369 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:15:10.369 00:15:10.369 --- 10.0.0.1 ping statistics --- 00:15:10.369 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:10.369 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:10.369 06:25:41 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=2052200 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 2052200 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # '[' -z 2052200 ']' 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:10.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:10.369 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:10.369 [2024-11-20 06:25:42.064593] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
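
For readers following the trace, the nvmfappstart step above amounts to launching nvmf_tgt inside the test network namespace and waiting until its RPC socket comes up. Below is a minimal bash sketch of that step, using only names that appear in the trace (the namespace cvl_0_0_ns_spdk, the nvmf_tgt binary path, /var/tmp/spdk.sock and the -i/-e/-m flags); the polling loop is a rough stand-in for the waitforlisten helper, not SPDK's actual implementation.

#!/usr/bin/env bash
# Sketch only: simplified equivalent of the nvmfappstart/waitforlisten sequence traced above.
# Paths and flags are taken from the trace; the wait loop is an assumption, not the real helper.

NS=cvl_0_0_ns_spdk                                              # target-side network namespace from the trace
TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock                                     # UNIX domain socket the target listens on

# Start the target inside the namespace with the same flags as the trace
# (-i 0: shm id, -e 0xFFFF: tracepoint group mask, -m 0xF: run reactors on cores 0-3).
ip netns exec "$NS" "$TGT" -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                                      # pid of the ip-netns wrapper process

# Poll until the RPC socket shows up, bailing out if the target dies first.
for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; exit 1; }
    sleep 0.1
done

echo "nvmf_tgt (pid $nvmfpid) is listening on $RPC_SOCK"

Once the socket is up, the test drives the target through scripts/rpc.py exactly as the following trace entries show, feeding it deliberately invalid subsystem parameters and checking the JSON-RPC error text.
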
00:15:10.369 [2024-11-20 06:25:42.064702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:10.369 [2024-11-20 06:25:42.138527] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:10.369 [2024-11-20 06:25:42.198235] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:10.369 [2024-11-20 06:25:42.198287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:10.369 [2024-11-20 06:25:42.198325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:10.369 [2024-11-20 06:25:42.198337] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:10.369 [2024-11-20 06:25:42.198347] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:10.369 [2024-11-20 06:25:42.200006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:10.369 [2024-11-20 06:25:42.200081] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:10.369 [2024-11-20 06:25:42.200084] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.369 [2024-11-20 06:25:42.200050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@866 -- # return 0 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:10.628 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode28300 00:15:10.886 [2024-11-20 06:25:42.598999] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:10.886 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:10.886 { 00:15:10.886 "nqn": "nqn.2016-06.io.spdk:cnode28300", 00:15:10.886 "tgt_name": "foobar", 00:15:10.886 "method": "nvmf_create_subsystem", 00:15:10.886 "req_id": 1 00:15:10.886 } 00:15:10.886 Got JSON-RPC error response 00:15:10.886 response: 00:15:10.886 { 00:15:10.886 "code": -32603, 00:15:10.886 "message": "Unable to find target foobar" 00:15:10.886 }' 00:15:10.886 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:10.886 { 00:15:10.886 "nqn": "nqn.2016-06.io.spdk:cnode28300", 00:15:10.886 "tgt_name": "foobar", 00:15:10.886 "method": "nvmf_create_subsystem", 00:15:10.886 "req_id": 1 00:15:10.886 } 00:15:10.886 Got JSON-RPC error response 00:15:10.886 
response: 00:15:10.886 { 00:15:10.886 "code": -32603, 00:15:10.886 "message": "Unable to find target foobar" 00:15:10.886 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:10.886 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:10.886 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode20808 00:15:11.143 [2024-11-20 06:25:42.879984] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode20808: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:11.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:11.144 { 00:15:11.144 "nqn": "nqn.2016-06.io.spdk:cnode20808", 00:15:11.144 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:11.144 "method": "nvmf_create_subsystem", 00:15:11.144 "req_id": 1 00:15:11.144 } 00:15:11.144 Got JSON-RPC error response 00:15:11.144 response: 00:15:11.144 { 00:15:11.144 "code": -32602, 00:15:11.144 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:11.144 }' 00:15:11.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:11.144 { 00:15:11.144 "nqn": "nqn.2016-06.io.spdk:cnode20808", 00:15:11.144 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:11.144 "method": "nvmf_create_subsystem", 00:15:11.144 "req_id": 1 00:15:11.144 } 00:15:11.144 Got JSON-RPC error response 00:15:11.144 response: 00:15:11.144 { 00:15:11.144 "code": -32602, 00:15:11.144 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:11.144 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:11.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:11.144 06:25:42 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4353 00:15:11.409 [2024-11-20 06:25:43.144868] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4353: invalid model number 'SPDK_Controller' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:11.409 { 00:15:11.409 "nqn": "nqn.2016-06.io.spdk:cnode4353", 00:15:11.409 "model_number": "SPDK_Controller\u001f", 00:15:11.409 "method": "nvmf_create_subsystem", 00:15:11.409 "req_id": 1 00:15:11.409 } 00:15:11.409 Got JSON-RPC error response 00:15:11.409 response: 00:15:11.409 { 00:15:11.409 "code": -32602, 00:15:11.409 "message": "Invalid MN SPDK_Controller\u001f" 00:15:11.409 }' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:11.409 { 00:15:11.409 "nqn": "nqn.2016-06.io.spdk:cnode4353", 00:15:11.409 "model_number": "SPDK_Controller\u001f", 00:15:11.409 "method": "nvmf_create_subsystem", 00:15:11.409 "req_id": 1 00:15:11.409 } 00:15:11.409 Got JSON-RPC error response 00:15:11.409 response: 00:15:11.409 { 00:15:11.409 "code": -32602, 00:15:11.409 "message": "Invalid MN SPDK_Controller\u001f" 00:15:11.409 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:15:11.409 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.409 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.409 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:15:11.410 
06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.410 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:11.724 
06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'ekg$zS{=5VVmQ/+a(Ybb?' 00:15:11.724 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'ekg$zS{=5VVmQ/+a(Ybb?' nqn.2016-06.io.spdk:cnode25828 00:15:11.724 [2024-11-20 06:25:43.518160] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25828: invalid serial number 'ekg$zS{=5VVmQ/+a(Ybb?' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:12.010 { 00:15:12.010 "nqn": "nqn.2016-06.io.spdk:cnode25828", 00:15:12.010 "serial_number": "ekg$zS{=5VVmQ/+a(Ybb?", 00:15:12.010 "method": "nvmf_create_subsystem", 00:15:12.010 "req_id": 1 00:15:12.010 } 00:15:12.010 Got JSON-RPC error response 00:15:12.010 response: 00:15:12.010 { 00:15:12.010 "code": -32602, 00:15:12.010 "message": "Invalid SN ekg$zS{=5VVmQ/+a(Ybb?" 00:15:12.010 }' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:12.010 { 00:15:12.010 "nqn": "nqn.2016-06.io.spdk:cnode25828", 00:15:12.010 "serial_number": "ekg$zS{=5VVmQ/+a(Ybb?", 00:15:12.010 "method": "nvmf_create_subsystem", 00:15:12.010 "req_id": 1 00:15:12.010 } 00:15:12.010 Got JSON-RPC error response 00:15:12.010 response: 00:15:12.010 { 00:15:12.010 "code": -32602, 00:15:12.010 "message": "Invalid SN ekg$zS{=5VVmQ/+a(Ybb?" 
00:15:12.010 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@25 -- # echo -e '\x54' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.010 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@25 -- # printf %x 69 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:15:12.011 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.011 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:15:12.012 06:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
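The xtrace above (and continuing below) is target/invalid.sh assembling a random test string one printable character at a time: printf %x turns a code point into hex, echo -e '\xNN' turns it back into a character, and string+= appends it. A minimal sketch of that builder follows, assuming an illustrative helper name and character range; the real script may choose its code points differently.

gen_random_string() {
    # Build a string of $1 printable ASCII characters the same way the trace does:
    # pick a code, render it with printf %x + echo -e '\xNN', then append it.
    local length=$1 ll code string=''
    for ((ll = 0; ll < length; ll++)); do
        code=$(printf '%x' $((RANDOM % 95 + 32)))    # 0x20 ' ' .. 0x7e '~' (assumed range)
        string+="$(echo -e "\x${code}")"
    done
    printf '%s\n' "$string"
}

# e.g. produce a 30-character junk name to feed to nvmf_create_subsystem and expect rejection
gen_random_string 30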
00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:15:12.012 06:25:43 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\^j4T(>Zc>PEC!Ia azRP<\I`%%B0pi51>s.AIoZc>PEC!Ia azRP<\I`%%B0pi51>s.AIoZc>PEC!Ia azRP<\I`%%B0pi51>s.AIoZc>PEC!Ia azRP<\\I`%%B0pi51>s.AIoZc>PEC!Ia azRP<\\I`%%B0pi51>s.AIo /dev/null' 00:15:14.849 06:25:46 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:17.384 00:15:17.384 real 0m9.099s 00:15:17.384 user 0m21.478s 00:15:17.384 sys 0m2.639s 00:15:17.384 06:25:48 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:17.384 ************************************ 00:15:17.384 END TEST nvmf_invalid 00:15:17.384 ************************************ 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:17.384 ************************************ 00:15:17.384 START TEST nvmf_connect_stress 00:15:17.384 ************************************ 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:17.384 * Looking for test storage... 00:15:17.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.384 --rc genhtml_branch_coverage=1 00:15:17.384 --rc genhtml_function_coverage=1 00:15:17.384 --rc genhtml_legend=1 00:15:17.384 --rc geninfo_all_blocks=1 00:15:17.384 --rc geninfo_unexecuted_blocks=1 00:15:17.384 00:15:17.384 ' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.384 --rc genhtml_branch_coverage=1 00:15:17.384 --rc genhtml_function_coverage=1 00:15:17.384 --rc genhtml_legend=1 00:15:17.384 --rc geninfo_all_blocks=1 00:15:17.384 --rc geninfo_unexecuted_blocks=1 00:15:17.384 00:15:17.384 ' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.384 --rc genhtml_branch_coverage=1 00:15:17.384 --rc genhtml_function_coverage=1 00:15:17.384 --rc genhtml_legend=1 00:15:17.384 --rc geninfo_all_blocks=1 00:15:17.384 --rc geninfo_unexecuted_blocks=1 00:15:17.384 00:15:17.384 ' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:17.384 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:17.384 --rc genhtml_branch_coverage=1 00:15:17.384 --rc genhtml_function_coverage=1 00:15:17.384 --rc genhtml_legend=1 00:15:17.384 --rc geninfo_all_blocks=1 00:15:17.384 --rc geninfo_unexecuted_blocks=1 00:15:17.384 00:15:17.384 ' 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:17.384 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:17.385 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:15:17.385 06:25:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:15:19.286 06:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:19.286 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.286 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:19.287 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:19.287 Found net devices under 0000:09:00.0: cvl_0_0 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:19.287 Found net devices under 0000:09:00.1: cvl_0_1 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:19.287 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:19.546 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:19.546 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.230 ms 00:15:19.546 00:15:19.546 --- 10.0.0.2 ping statistics --- 00:15:19.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.546 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:19.546 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:19.546 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.083 ms 00:15:19.546 00:15:19.546 --- 10.0.0.1 ping statistics --- 00:15:19.546 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:19.546 rtt min/avg/max/mdev = 0.083/0.083/0.083/0.000 ms 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=2054885 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 2054885 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # '[' -z 2054885 ']' 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:19.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:19.546 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.546 [2024-11-20 06:25:51.286813] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:15:19.546 [2024-11-20 06:25:51.286898] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:19.546 [2024-11-20 06:25:51.358280] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:19.805 [2024-11-20 06:25:51.414246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:19.805 [2024-11-20 06:25:51.414294] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:19.805 [2024-11-20 06:25:51.414330] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:19.805 [2024-11-20 06:25:51.414342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:19.805 [2024-11-20 06:25:51.414351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:19.805 [2024-11-20 06:25:51.415818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:19.805 [2024-11-20 06:25:51.415936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:19.805 [2024-11-20 06:25:51.415942] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@866 -- # return 0 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.805 [2024-11-20 06:25:51.556499] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 
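By this point the trace has discovered the two e810 ports (cvl_0_0 / cvl_0_1), split them across a network namespace, verified connectivity both ways, started nvmf_tgt inside the namespace, and begun configuring it over RPC; the TCP listener and NULL bdev that complete the setup follow just below. A condensed sketch of that bring-up, with interface names, addresses, and RPC arguments taken from the log above (paths are abbreviated and the harness's waitforlisten helper is replaced by a plain sleep):

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, left in the default namespace

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

ping -c 1 10.0.0.2                      # initiator -> target
ip netns exec "$NS" ping -c 1 10.0.0.1  # target -> initiator

# Start the target inside the namespace; the RPC socket is a pathname UNIX
# socket, so rpc.py can still be driven from the default namespace.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
sleep 3   # stand-in for waitforlisten on /var/tmp/spdk.sock

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10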
00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.805 [2024-11-20 06:25:51.574141] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:19.805 NULL1 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2054988 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.805 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:19.806 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:19.806 06:25:51 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:19.806 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:19.806 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:19.806 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.372 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.372 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:20.372 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.372 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.372 06:25:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.630 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.630 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:20.630 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.630 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.630 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:20.888 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:20.888 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:20.888 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:20.888 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:20.888 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.145 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.145 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:21.145 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.145 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.145 06:25:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.709 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.709 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:21.709 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.709 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.709 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:21.966 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:21.966 06:25:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:21.966 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:21.966 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:21.966 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.224 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.224 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:22.224 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.224 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.224 06:25:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.482 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.482 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:22.482 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.482 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.482 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:22.739 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:22.739 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:22.739 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:22.739 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:22.739 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.304 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.304 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:23.304 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.304 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.304 06:25:54 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.562 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.562 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:23.562 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.562 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.562 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:23.821 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:23.821 06:25:55 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:23.821 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:23.821 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:23.821 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.078 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.078 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:24.078 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.078 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.078 06:25:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.336 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.336 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:24.336 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.336 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.336 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:24.901 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:24.901 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:24.901 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:24.901 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:24.901 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.158 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.158 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:25.158 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.158 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.158 06:25:56 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.414 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.414 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:25.414 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.414 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.414 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.672 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.672 06:25:57 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:25.672 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.672 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.672 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:25.929 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:25.929 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:25.929 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:25.929 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:25.929 06:25:57 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.494 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.494 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:26.494 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.494 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.494 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.752 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:26.752 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:26.752 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.752 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:26.752 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.010 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.010 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:27.010 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.010 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.010 06:25:58 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.268 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.268 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:27.268 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.268 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.268 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.526 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:27.526 06:25:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:27.526 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.526 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:27.526 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.092 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.092 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:28.092 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.092 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.092 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.350 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.350 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:28.350 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.350 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.350 06:25:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.608 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.608 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:28.608 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.608 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.608 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.866 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:28.866 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:28.866 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.866 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:28.866 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.123 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.123 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:29.123 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.123 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.123 06:26:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.688 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.688 06:26:01 
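The near-identical entries above come from the connect_stress.sh watchdog loop: while the background stress helper (pid 2054988 in this run) is still alive, the script keeps issuing RPCs against the target. A minimal sketch of that polling pattern, with the pid variable and per-iteration RPC call shown as stand-ins, looks like:

    # Sketch of the watchdog pattern, not the literal script; 2054988 is this run's pid.
    stress_pid=2054988
    while kill -0 "$stress_pid" 2>/dev/null; do   # kill -0 sends no signal, it only tests liveness
        rpc_cmd                                   # stand-in for the suite's JSON-RPC helper called each pass
    done
    wait "$stress_pid" 2>/dev/null || true        # reap the helper once kill -0 starts failing

Once kill -0 starts failing (seen a little further down as "No such process"), the loop falls through to the wait and the script moves on to cleanup.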
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:29.688 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.688 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.688 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.946 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.947 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:29.947 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.947 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.947 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.947 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2054988 00:15:30.204 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2054988) - No such process 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2054988 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:30.204 rmmod nvme_tcp 00:15:30.204 rmmod nvme_fabrics 00:15:30.204 rmmod nvme_keyring 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 2054885 ']' 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 2054885 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # '[' -z 2054885 ']' 00:15:30.204 06:26:01 
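The killprocess call invoked just above targets pid 2054885, the nvmf_tgt reactor, and the checks it runs are traced here and continue below: verify the pid is set, confirm the process is alive, look up its command name (reactor_1 in this run), make sure it is not a sudo wrapper, and only then signal and wait. A condensed sketch of those guards, with an illustrative function name and the non-Linux and sudo branches simplified away:

    killprocess_sketch() {                          # illustrative name, not the real helper
        local pid=$1
        [ -n "$pid" ] || return 1                   # same "[ -z ... ]" guard as in the trace above
        kill -0 "$pid" 2>/dev/null || return 0      # nothing to do if it already exited
        [ "$(uname)" = Linux ] || return 1          # only the Linux branch is exercised here
        local name
        name=$(ps --no-headers -o comm= "$pid")     # reactor_1 in this run
        [ "$name" = sudo ] && return 1              # the real helper treats a sudo wrapper specially; skipped in this sketch
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true
    }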
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # kill -0 2054885 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # uname 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:30.204 06:26:01 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2054885 00:15:30.204 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:30.204 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:30.204 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2054885' 00:15:30.204 killing process with pid 2054885 00:15:30.204 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@971 -- # kill 2054885 00:15:30.204 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@976 -- # wait 2054885 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:30.462 06:26:02 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:32.528 00:15:32.528 real 0m15.608s 00:15:32.528 user 0m38.545s 00:15:32.528 sys 0m6.053s 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.528 ************************************ 00:15:32.528 END TEST nvmf_connect_stress 00:15:32.528 ************************************ 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:32.528 
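The teardown traced above boils down to: sync, unload the initiator-side nvme-tcp/nvme-fabrics modules, kill the nvmf_tgt, drop the SPDK-tagged firewall rules, remove the target's network namespace, and flush the test address. A simplified sketch of that sequence, using this run's interface and namespace names; the ip netns delete line is an assumed stand-in for the _remove_spdk_ns helper, whose own output is suppressed in the trace:

    sync
    modprobe -v -r nvme-tcp                                 # drops nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    iptables-save | grep -v SPDK_NVMF | iptables-restore    # keep every rule except the test's tagged ones
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true     # assumed equivalent of _remove_spdk_ns
    ip -4 addr flush cvl_0_1                                # clear the initiator-side test address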
06:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:32.528 ************************************ 00:15:32.528 START TEST nvmf_fused_ordering 00:15:32.528 ************************************ 00:15:32.528 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:32.789 * Looking for test storage... 00:15:32.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lcov --version 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.789 --rc genhtml_branch_coverage=1 00:15:32.789 --rc genhtml_function_coverage=1 00:15:32.789 --rc genhtml_legend=1 00:15:32.789 --rc geninfo_all_blocks=1 00:15:32.789 --rc geninfo_unexecuted_blocks=1 00:15:32.789 00:15:32.789 ' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.789 --rc genhtml_branch_coverage=1 00:15:32.789 --rc genhtml_function_coverage=1 00:15:32.789 --rc genhtml_legend=1 00:15:32.789 --rc geninfo_all_blocks=1 00:15:32.789 --rc geninfo_unexecuted_blocks=1 00:15:32.789 00:15:32.789 ' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.789 --rc genhtml_branch_coverage=1 00:15:32.789 --rc genhtml_function_coverage=1 00:15:32.789 --rc genhtml_legend=1 00:15:32.789 --rc geninfo_all_blocks=1 00:15:32.789 --rc geninfo_unexecuted_blocks=1 00:15:32.789 00:15:32.789 ' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:32.789 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:32.789 --rc genhtml_branch_coverage=1 00:15:32.789 --rc genhtml_function_coverage=1 00:15:32.789 --rc genhtml_legend=1 00:15:32.789 --rc geninfo_all_blocks=1 00:15:32.789 --rc geninfo_unexecuted_blocks=1 00:15:32.789 00:15:32.789 ' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:32.789 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:15:32.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:15:32.790 06:26:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.322 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:35.322 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:15:35.322 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:15:35.323 06:26:06 
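The per-vendor PCI ID tables being declared here (e810, x722, and the mlx list that follows) feed the discovery loop traced below, which matches each E810 function and then asks sysfs which kernel interfaces sit behind it. A rough sketch of that sysfs lookup, using one of the addresses reported further down as an example:

    # Sketch of the lookup performed per matching PCI function in the loop below.
    pci=0000:09:00.0                                    # example address from this run
    pci_net_devs=( /sys/bus/pci/devices/$pci/net/* )    # one path per kernel netdev on that function
    pci_net_devs=( "${pci_net_devs[@]##*/}" )           # strip the path, keep names such as cvl_0_0
    echo "Found net devices under $pci: ${pci_net_devs[*]}"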
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:35.323 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:35.323 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:35.323 Found net devices under 0000:09:00.0: cvl_0_0 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:35.323 Found net devices under 0000:09:00.1: cvl_0_1 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:35.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:35.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:15:35.323 00:15:35.323 --- 10.0.0.2 ping statistics --- 00:15:35.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.323 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:15:35.323 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:35.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:35.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.147 ms 00:15:35.323 00:15:35.323 --- 10.0.0.1 ping statistics --- 00:15:35.324 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:35.324 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=2058143 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 2058143 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # '[' -z 2058143 ']' 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:15:35.324 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:35.324 06:26:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.324 [2024-11-20 06:26:06.879643] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:15:35.324 [2024-11-20 06:26:06.879738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:35.324 [2024-11-20 06:26:06.955194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:35.324 [2024-11-20 06:26:07.014964] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:35.324 [2024-11-20 06:26:07.015009] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:35.324 [2024-11-20 06:26:07.015038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:35.324 [2024-11-20 06:26:07.015049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:35.324 [2024-11-20 06:26:07.015059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:35.324 [2024-11-20 06:26:07.015733] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:35.324 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:35.324 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@866 -- # return 0 00:15:35.324 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:35.324 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:35.324 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 [2024-11-20 06:26:07.167470] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 [2024-11-20 06:26:07.183704] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 NULL1 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.582 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:35.583 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.583 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:35.583 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:35.583 06:26:07 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:35.583 [2024-11-20 06:26:07.229685] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
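The target configuration just traced is a plain JSON-RPC sequence; assuming the rpc_cmd wrapper simply forwards to scripts/rpk... rather, to scripts/rpc.py against the target running inside cvl_0_0_ns_spdk, the equivalent standalone sequence for this run would look roughly like:

    rpc="scripts/rpc.py"                                       # point at the target's RPC socket if it differs from the default
    $rpc nvmf_create_transport -t tcp -o -u 8192               # same flags as the rpc_cmd call above (-u 8192 sets the IO unit size)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512                       # ~1 GB null bdev, 512-byte blocks
    $rpc bdev_wait_for_examine
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

The fused_ordering initiator is then pointed at trtype:tcp traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1, which is the connection string visible just above.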
00:15:35.583 [2024-11-20 06:26:07.229730] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2058249 ] 00:15:35.840 Attached to nqn.2016-06.io.spdk:cnode1 00:15:35.840 Namespace ID: 1 size: 1GB
00:15:35.840 fused_ordering(0) 00:15:35.840 fused_ordering(1) 00:15:35.840 fused_ordering(2) ... 00:15:37.801 fused_ordering(957) 00:15:37.801 fused_ordering(958)
00:15:37.801 fused_ordering(959) 00:15:37.801 fused_ordering(960) 00:15:37.801 fused_ordering(961) 00:15:37.801 fused_ordering(962) 00:15:37.801 fused_ordering(963) 00:15:37.801 fused_ordering(964) 00:15:37.801 fused_ordering(965) 00:15:37.801 fused_ordering(966) 00:15:37.801 fused_ordering(967) 00:15:37.801 fused_ordering(968) 00:15:37.801 fused_ordering(969) 00:15:37.801 fused_ordering(970) 00:15:37.801 fused_ordering(971) 00:15:37.801 fused_ordering(972) 00:15:37.801 fused_ordering(973) 00:15:37.801 fused_ordering(974) 00:15:37.801 fused_ordering(975) 00:15:37.801 fused_ordering(976) 00:15:37.801 fused_ordering(977) 00:15:37.801 fused_ordering(978) 00:15:37.801 fused_ordering(979) 00:15:37.801 fused_ordering(980) 00:15:37.801 fused_ordering(981) 00:15:37.801 fused_ordering(982) 00:15:37.801 fused_ordering(983) 00:15:37.801 fused_ordering(984) 00:15:37.801 fused_ordering(985) 00:15:37.801 fused_ordering(986) 00:15:37.801 fused_ordering(987) 00:15:37.801 fused_ordering(988) 00:15:37.801 fused_ordering(989) 00:15:37.801 fused_ordering(990) 00:15:37.801 fused_ordering(991) 00:15:37.801 fused_ordering(992) 00:15:37.801 fused_ordering(993) 00:15:37.801 fused_ordering(994) 00:15:37.801 fused_ordering(995) 00:15:37.801 fused_ordering(996) 00:15:37.801 fused_ordering(997) 00:15:37.801 fused_ordering(998) 00:15:37.801 fused_ordering(999) 00:15:37.801 fused_ordering(1000) 00:15:37.801 fused_ordering(1001) 00:15:37.801 fused_ordering(1002) 00:15:37.801 fused_ordering(1003) 00:15:37.801 fused_ordering(1004) 00:15:37.801 fused_ordering(1005) 00:15:37.801 fused_ordering(1006) 00:15:37.801 fused_ordering(1007) 00:15:37.801 fused_ordering(1008) 00:15:37.801 fused_ordering(1009) 00:15:37.801 fused_ordering(1010) 00:15:37.801 fused_ordering(1011) 00:15:37.801 fused_ordering(1012) 00:15:37.801 fused_ordering(1013) 00:15:37.801 fused_ordering(1014) 00:15:37.801 fused_ordering(1015) 00:15:37.801 fused_ordering(1016) 00:15:37.801 fused_ordering(1017) 00:15:37.801 fused_ordering(1018) 00:15:37.801 fused_ordering(1019) 00:15:37.801 fused_ordering(1020) 00:15:37.801 fused_ordering(1021) 00:15:37.801 fused_ordering(1022) 00:15:37.801 fused_ordering(1023) 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:15:37.801 rmmod nvme_tcp 00:15:37.801 rmmod nvme_fabrics 00:15:37.801 rmmod nvme_keyring 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:15:37.801 06:26:09 
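The tail of the trace above is the transport cleanup: after the fused_ordering run finishes, the test syncs and unloads the NVMe/TCP kernel modules so the next test starts from a clean state. A minimal bash sketch of that step, reconstructed from the traced commands (the function name and the sleep between retries are assumptions, not the exact nvmf/common.sh helper):

    # Sketch of the module-unload step seen above (illustrative name, not the real helper).
    # set +e tolerates "module is in use" failures while connections finish closing.
    nvmf_module_cleanup() {
        sync
        set +e
        for i in {1..20}; do
            modprobe -v -r nvme-tcp && break
            sleep 0.5    # assumed back-off; the trace only shows the retry loop
        done
        modprobe -v -r nvme-fabrics
        set -e
        return 0
    }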
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 2058143 ']' 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 2058143 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # '[' -z 2058143 ']' 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # kill -0 2058143 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # uname 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:15:37.801 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2058143 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2058143' 00:15:38.060 killing process with pid 2058143 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@971 -- # kill 2058143 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@976 -- # wait 2058143 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:38.060 06:26:09 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:15:40.602 00:15:40.602 real 0m7.581s 00:15:40.602 user 0m5.065s 00:15:40.602 sys 0m3.201s 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1128 -- # xtrace_disable 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:40.602 ************************************ 00:15:40.602 END TEST nvmf_fused_ordering 00:15:40.602 
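The killprocess step traced above verifies the PID before signalling it: non-empty, still alive, and not a sudo wrapper. A small sketch of that pattern (illustrative, not the exact autotest_common.sh helper; the sudo case is simplified to a skip here):

    # Sketch of the kill-and-wait pattern traced above.
    killprocess() {
        local pid=$1
        [[ -z "$pid" ]] && return 1
        kill -0 "$pid" || return 1                 # is the process still alive?
        if [[ "$(uname)" == Linux ]]; then
            local name
            name=$(ps --no-headers -o comm= "$pid")
            [[ "$name" == sudo ]] && return 1      # the real helper special-cases sudo wrappers; we just skip them
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid" 2>/dev/null || true            # reap it if it is a child of this shell
    }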
************************************ 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:40.602 ************************************ 00:15:40.602 START TEST nvmf_ns_masking 00:15:40.602 ************************************ 00:15:40.602 06:26:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1127 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:40.602 * Looking for test storage... 00:15:40.602 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lcov --version 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:15:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.602 --rc genhtml_branch_coverage=1 00:15:40.602 --rc genhtml_function_coverage=1 00:15:40.602 --rc genhtml_legend=1 00:15:40.602 --rc geninfo_all_blocks=1 00:15:40.602 --rc geninfo_unexecuted_blocks=1 00:15:40.602 00:15:40.602 ' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:15:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.602 --rc genhtml_branch_coverage=1 00:15:40.602 --rc genhtml_function_coverage=1 00:15:40.602 --rc genhtml_legend=1 00:15:40.602 --rc geninfo_all_blocks=1 00:15:40.602 --rc geninfo_unexecuted_blocks=1 00:15:40.602 00:15:40.602 ' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:15:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.602 --rc genhtml_branch_coverage=1 00:15:40.602 --rc genhtml_function_coverage=1 00:15:40.602 --rc genhtml_legend=1 00:15:40.602 --rc geninfo_all_blocks=1 00:15:40.602 --rc geninfo_unexecuted_blocks=1 00:15:40.602 00:15:40.602 ' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:15:40.602 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:40.602 --rc genhtml_branch_coverage=1 00:15:40.602 --rc genhtml_function_coverage=1 00:15:40.602 --rc genhtml_legend=1 00:15:40.602 --rc geninfo_all_blocks=1 00:15:40.602 --rc geninfo_unexecuted_blocks=1 00:15:40.602 00:15:40.602 ' 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:40.602 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
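The block above is the lcov version probe: the helper splits each dotted version on '.', '-' and ':' (the IFS seen in the trace) and compares the components field by field to decide which LCOV option set to export. A compact sketch of that comparison (the helper name and the non-numeric fallback are illustrative):

    # Sketch: return success when version $1 sorts before version $2.
    version_lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            local a=${ver1[v]:-0} b=${ver2[v]:-0}
            [[ "$a" =~ ^[0-9]+$ ]] || a=0          # non-numeric components compare as 0 (assumption)
            [[ "$b" =~ ^[0-9]+$ ]] || b=0
            (( a > b )) && return 1
            (( a < b )) && return 0
        done
        return 1
    }

    version_lt 1.15 2 && echo "lcov is older than 2.x"   # matches the 'lt 1.15 2' probe above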
nvmf/common.sh@7 -- # uname -s 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:15:40.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f3d9ccc0-a817-4935-b882-ec0fc35c9e74 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d4c52cb7-5472-4425-86ed-28de59a8a481 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=16ae6f0f-ec26-4d1b-b502-72e49904d402 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g is_hw=no 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:15:40.603 06:26:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:15:42.509 06:26:14 
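The ns_masking setup above generates fresh identifiers for each run: two namespace UUIDs, the subsystem NQN, two host NQNs, and a host ID. In shell form (the NQNs mirror the trace; the UUID values naturally differ on every run):

    ns1uuid=$(uuidgen)                      # UUID for namespace 1
    ns2uuid=$(uuidgen)                      # UUID for namespace 2
    SUBSYSNQN=nqn.2016-06.io.spdk:cnode1    # subsystem under test
    HOSTNQN1=nqn.2016-06.io.spdk:host1      # first host used by the masking checks
    HOSTNQN2=nqn.2016-06.io.spdk:host2      # second host
    HOSTID=$(uuidgen)                       # host identifier passed on connect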
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:15:42.509 Found 0000:09:00.0 (0x8086 - 0x159b) 00:15:42.509 06:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:15:42.509 Found 0000:09:00.1 (0x8086 - 0x159b) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:15:42.509 Found net devices under 0000:09:00.0: cvl_0_0 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 
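The device-discovery pass above walks the candidate PCI addresses (two Intel E810 ports, 0x8086:0x159b, the first of which resolves to cvl_0_0; the second port is scanned the same way just below) and keeps the ones that expose a network interface under sysfs. A reduced sketch of that scan, using the same variable names as the trace (the existence guard replaces the driver and link-state checks of the full helper):

    # Sketch: collect kernel net devices that sit on the candidate NICs.
    pci_devs=(0000:09:00.0 0000:09:00.1)                    # E810 ports found above
    net_devs=()
    for pci in "${pci_devs[@]}"; do
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)    # e.g. .../net/cvl_0_0
        [[ -e ${pci_net_devs[0]} ]] || continue             # no netdev bound to this port
        pci_net_devs=("${pci_net_devs[@]##*/}")             # strip the sysfs path
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done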
00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:15:42.509 Found net devices under 0000:09:00.1: cvl_0_1 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:42.509 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:15:42.772 06:26:14 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:15:42.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:42.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.313 ms 00:15:42.772 00:15:42.772 --- 10.0.0.2 ping statistics --- 00:15:42.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.772 rtt min/avg/max/mdev = 0.313/0.313/0.313/0.000 ms 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:15:42.772 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:42.772 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:15:42.772 00:15:42.772 --- 10.0.0.1 ping statistics --- 00:15:42.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:42.772 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=2060501 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 2060501 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2060501 ']' 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:42.772 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:42.772 [2024-11-20 06:26:14.544345] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:15:42.772 [2024-11-20 06:26:14.544435] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:43.030 [2024-11-20 06:26:14.615655] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:43.030 [2024-11-20 06:26:14.669781] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:43.030 [2024-11-20 06:26:14.669837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:43.030 [2024-11-20 06:26:14.669865] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:43.030 [2024-11-20 06:26:14.669875] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:43.030 [2024-11-20 06:26:14.669884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
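(Annotation, not part of the trace.) Once nvmf_tgt is up inside the cvl_0_0_ns_spdk namespace, the entries that follow drive the target-side setup over JSON-RPC: create the TCP transport, create two 64 MB malloc bdevs with 512-byte blocks, create subsystem nqn.2016-06.io.spdk:cnode1, attach a namespace, and listen on 10.0.0.2:4420. Namespace masking is then exercised by removing the auto-visible namespace, re-adding it with --no-auto-visible, and toggling per-host visibility with nvmf_ns_add_host / nvmf_ns_remove_host. A condensed sketch of that RPC sequence (rpc.py abbreviates the full scripts/rpc.py path used in the trace; by default it targets /var/tmp/spdk.sock, which is why the target-side calls carry no -s flag, unlike the later host-side calls against /var/tmp/host.sock):

  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py bdev_malloc_create 64 512 -b Malloc1
  rpc.py bdev_malloc_create 64 512 -b Malloc2
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # masking path under test: hide the namespace by default, then expose it per host
  rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1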
00:15:43.030 [2024-11-20 06:26:14.670496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:43.030 06:26:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:43.288 [2024-11-20 06:26:15.049090] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:43.288 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:43.288 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:43.288 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:43.854 Malloc1 00:15:43.855 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:44.113 Malloc2 00:15:44.113 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:44.371 06:26:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:44.630 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:44.888 [2024-11-20 06:26:16.511542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:44.888 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:44.888 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 16ae6f0f-ec26-4d1b-b502-72e49904d402 -a 10.0.0.2 -s 4420 -i 4 00:15:45.146 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:45.146 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:45.146 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:45.146 06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:15:45.146 
06:26:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.049 [ 0]:0x1 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.049 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.307 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed2ff438e5d483e9a2aa8380bf0d224 00:15:47.307 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed2ff438e5d483e9a2aa8380bf0d224 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.307 06:26:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:47.565 [ 0]:0x1 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed2ff438e5d483e9a2aa8380bf0d224 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed2ff438e5d483e9a2aa8380bf0d224 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.565 06:26:19 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:47.565 [ 1]:0x2 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf0a91ea135b4655b09b7583916e597a 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:47.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:47.565 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:48.129 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:15:48.387 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:15:48.387 06:26:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 16ae6f0f-ec26-4d1b-b502-72e49904d402 -a 10.0.0.2 -s 4420 -i 4 00:15:48.387 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:15:48.387 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:48.387 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:48.387 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 1 ]] 00:15:48.387 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=1 00:15:48.387 06:26:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # 
return 0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.917 [ 0]:0x2 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=bf0a91ea135b4655b09b7583916e597a 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:50.917 [ 0]:0x1 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed2ff438e5d483e9a2aa8380bf0d224 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed2ff438e5d483e9a2aa8380bf0d224 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:50.917 [ 1]:0x2 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf0a91ea135b4655b09b7583916e597a 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:50.917 06:26:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:51.176 06:26:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.176 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:51.434 [ 0]:0x2 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf0a91ea135b4655b09b7583916e597a 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:51.434 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:51.434 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 16ae6f0f-ec26-4d1b-b502-72e49904d402 -a 10.0.0.2 -s 4420 -i 4 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # local i=0 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:15:52.000 06:26:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # sleep 2 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # return 0 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:54.527 [ 0]:0x1 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:54.527 06:26:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=fed2ff438e5d483e9a2aa8380bf0d224 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ fed2ff438e5d483e9a2aa8380bf0d224 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:54.527 [ 1]:0x2 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf0a91ea135b4655b09b7583916e597a 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.527 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:54.785 [ 0]:0x2 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf0a91ea135b4655b09b7583916e597a 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:54.785 06:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:54.785 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:15:55.044 [2024-11-20 06:26:26.806497] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:15:55.044 request: 00:15:55.044 { 00:15:55.044 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:15:55.044 "nsid": 2, 00:15:55.044 "host": "nqn.2016-06.io.spdk:host1", 00:15:55.044 "method": "nvmf_ns_remove_host", 00:15:55.044 "req_id": 1 00:15:55.044 } 00:15:55.044 Got JSON-RPC error response 00:15:55.044 response: 00:15:55.044 { 00:15:55.044 "code": -32602, 00:15:55.044 "message": "Invalid parameters" 00:15:55.044 } 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:15:55.044 06:26:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:15:55.044 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:15:55.302 [ 0]:0x2 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=bf0a91ea135b4655b09b7583916e597a 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ bf0a91ea135b4655b09b7583916e597a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:15:55.302 06:26:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:55.302 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2062122 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2062122 /var/tmp/host.sock 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # '[' -z 2062122 ']' 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # local max_retries=100 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:55.302 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # xtrace_disable 00:15:55.302 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:55.560 [2024-11-20 06:26:27.155294] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:15:55.560 [2024-11-20 06:26:27.155389] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2062122 ] 00:15:55.560 [2024-11-20 06:26:27.223344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:55.560 [2024-11-20 06:26:27.281597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.818 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:15:55.818 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@866 -- # return 0 00:15:55.818 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:56.075 06:26:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:56.334 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f3d9ccc0-a817-4935-b882-ec0fc35c9e74 00:15:56.334 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:56.334 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F3D9CCC0A8174935B882EC0FC35C9E74 -i 00:15:56.897 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d4c52cb7-5472-4425-86ed-28de59a8a481 00:15:56.897 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:56.897 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D4C52CB75472442586ED28DE59A8A481 -i 00:15:56.897 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:15:57.155 06:26:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:15:57.413 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:57.413 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:15:57.978 nvme0n1 00:15:57.978 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:57.978 06:26:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:15:58.544 nvme1n2 00:15:58.544 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:15:58.544 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:15:58.544 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:15:58.544 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:15:58.545 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:15:58.545 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:15:58.802 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:15:58.802 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:15:58.802 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:15:59.060 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f3d9ccc0-a817-4935-b882-ec0fc35c9e74 == \f\3\d\9\c\c\c\0\-\a\8\1\7\-\4\9\3\5\-\b\8\8\2\-\e\c\0\f\c\3\5\c\9\e\7\4 ]] 00:15:59.060 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:15:59.060 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:15:59.060 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:15:59.318 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 
d4c52cb7-5472-4425-86ed-28de59a8a481 == \d\4\c\5\2\c\b\7\-\5\4\7\2\-\4\4\2\5\-\8\6\e\d\-\2\8\d\e\5\9\a\8\a\4\8\1 ]] 00:15:59.318 06:26:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:59.576 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f3d9ccc0-a817-4935-b882-ec0fc35c9e74 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F3D9CCC0A8174935B882EC0FC35C9E74 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F3D9CCC0A8174935B882EC0FC35C9E74 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:15:59.834 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F3D9CCC0A8174935B882EC0FC35C9E74 00:16:00.092 [2024-11-20 06:26:31.728579] bdev.c:8480:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:00.092 [2024-11-20 06:26:31.728641] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:00.092 [2024-11-20 06:26:31.728662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:00.092 request: 00:16:00.092 { 00:16:00.092 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:00.092 "namespace": { 00:16:00.092 "bdev_name": 
"invalid", 00:16:00.092 "nsid": 1, 00:16:00.092 "nguid": "F3D9CCC0A8174935B882EC0FC35C9E74", 00:16:00.092 "no_auto_visible": false 00:16:00.092 }, 00:16:00.092 "method": "nvmf_subsystem_add_ns", 00:16:00.092 "req_id": 1 00:16:00.092 } 00:16:00.092 Got JSON-RPC error response 00:16:00.092 response: 00:16:00.092 { 00:16:00.092 "code": -32602, 00:16:00.092 "message": "Invalid parameters" 00:16:00.092 } 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f3d9ccc0-a817-4935-b882-ec0fc35c9e74 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:16:00.092 06:26:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F3D9CCC0A8174935B882EC0FC35C9E74 -i 00:16:00.350 06:26:32 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:02.251 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:02.251 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:02.251 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:02.816 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:02.816 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2062122 00:16:02.816 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2062122 ']' 00:16:02.816 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2062122 00:16:02.816 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2062122 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2062122' 00:16:02.817 killing process with pid 2062122 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2062122 00:16:02.817 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2062122 00:16:03.075 06:26:34 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:03.333 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:03.333 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:03.333 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:03.333 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:16:03.333 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:03.334 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:16:03.334 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:03.334 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:03.334 rmmod nvme_tcp 00:16:03.334 rmmod nvme_fabrics 00:16:03.592 rmmod nvme_keyring 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 2060501 ']' 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 2060501 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # '[' -z 2060501 ']' 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # kill -0 2060501 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # uname 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2060501 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2060501' 00:16:03.592 killing process with pid 2060501 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@971 -- # kill 2060501 00:16:03.592 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@976 -- # wait 2060501 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
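The namespace-masking portion of the trace just above boils down to a short JSON-RPC sequence: remove both namespaces, convert the UUID to an NGUID, show that re-adding against a nonexistent bdev fails with JSON-RPC error -32602, then re-add the real Malloc1 bdev under that NGUID. Below is a condensed, hedged reconstruction, not a verbatim excerpt of ns_masking.sh; the variable names and standalone invocation are assumptions, and the iptables/namespace cleanup of this test continues in the log lines that follow.

  # Sketch only: assumes a running SPDK nvmf target reachable on the default RPC socket.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  SUBSYS=nqn.2016-06.io.spdk:cnode1

  uuid=f3d9ccc0-a817-4935-b882-ec0fc35c9e74
  nguid=$(tr -d - <<< "${uuid^^}")       # uuid2nguid: uppercase, strip dashes

  $RPC nvmf_subsystem_remove_ns "$SUBSYS" 1
  $RPC nvmf_subsystem_remove_ns "$SUBSYS" 2

  # A bdev name that does not exist is rejected with -32602 "Invalid parameters",
  # exactly as captured in the error response above.
  $RPC nvmf_subsystem_add_ns "$SUBSYS" invalid -n 1 -g "$nguid" || true

  # Re-adding the real bdev succeeds; the trailing -i mirrors the traced call
  # (the earlier request body shows a no_auto_visible field, so -i appears to
  # control whether hosts see the namespace automatically).
  $RPC nvmf_subsystem_add_ns "$SUBSYS" Malloc1 -n 1 -g "$nguid" -i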
00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:03.851 06:26:35 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:05.758 00:16:05.758 real 0m25.599s 00:16:05.758 user 0m36.944s 00:16:05.758 sys 0m4.826s 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:05.758 ************************************ 00:16:05.758 END TEST nvmf_ns_masking 00:16:05.758 ************************************ 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:05.758 06:26:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:06.018 ************************************ 00:16:06.018 START TEST nvmf_nvme_cli 00:16:06.018 ************************************ 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:06.018 * Looking for test storage... 
00:16:06.018 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lcov --version 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:06.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.018 --rc genhtml_branch_coverage=1 00:16:06.018 --rc genhtml_function_coverage=1 00:16:06.018 --rc genhtml_legend=1 00:16:06.018 --rc geninfo_all_blocks=1 00:16:06.018 --rc geninfo_unexecuted_blocks=1 00:16:06.018 00:16:06.018 ' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:06.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.018 --rc genhtml_branch_coverage=1 00:16:06.018 --rc genhtml_function_coverage=1 00:16:06.018 --rc genhtml_legend=1 00:16:06.018 --rc geninfo_all_blocks=1 00:16:06.018 --rc geninfo_unexecuted_blocks=1 00:16:06.018 00:16:06.018 ' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:06.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.018 --rc genhtml_branch_coverage=1 00:16:06.018 --rc genhtml_function_coverage=1 00:16:06.018 --rc genhtml_legend=1 00:16:06.018 --rc geninfo_all_blocks=1 00:16:06.018 --rc geninfo_unexecuted_blocks=1 00:16:06.018 00:16:06.018 ' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:06.018 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:06.018 --rc genhtml_branch_coverage=1 00:16:06.018 --rc genhtml_function_coverage=1 00:16:06.018 --rc genhtml_legend=1 00:16:06.018 --rc geninfo_all_blocks=1 00:16:06.018 --rc geninfo_unexecuted_blocks=1 00:16:06.018 00:16:06.018 ' 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 
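The lcov probing traced above hinges on a component-wise version comparison (the lt/cmp_versions helpers in scripts/common.sh, splitting versions on '.', '-' and ':'). The sketch below captures the same idea in a self-contained form; it is an illustration, not the SPDK helper itself. The trace then moves on to sourcing test/nvmf/common.sh, which continues in the lines that follow.

  # Sketch: succeed (return 0) when $1 < $2, comparing numeric components
  # left to right; missing components count as 0, equality is "not less".
  version_lt() {
      local IFS=.-:
      local -a a=($1) b=($2)
      local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
      for (( i = 0; i < n; i++ )); do
          (( 10#${a[i]:-0} > 10#${b[i]:-0} )) && return 1
          (( 10#${a[i]:-0} < 10#${b[i]:-0} )) && return 0
      done
      return 1
  }

  version_lt 1.15 2 && echo "lcov older than 2.x: use the legacy option set"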
00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:06.018 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:06.019 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:06.019 06:26:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:16:06.019 06:26:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:16:08.559 Found 0000:09:00.0 (0x8086 - 0x159b) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:16:08.559 Found 0000:09:00.1 (0x8086 - 0x159b) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:08.559 
06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:16:08.559 Found net devices under 0000:09:00.0: cvl_0_0 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:16:08.559 Found net devices under 0000:09:00.1: cvl_0_1 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:16:08.559 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:08.560 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:08.560 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:16:08.560 00:16:08.560 --- 10.0.0.2 ping statistics --- 00:16:08.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.560 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:08.560 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
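Stripped of the xtrace noise, the network plumbing set up in the preceding lines is a two-port loopback rig: one port of the E810 NIC (cvl_0_0) is moved into a private network namespace to play the target, its sibling (cvl_0_1) stays in the root namespace as the initiator, and a single ping in each direction proves the link (the reverse-direction ping that starts just above completes in the lines that follow). A hedged reconstruction, using the interface names reported by this particular machine:

  # Run as root; cvl_0_0/cvl_0_1 are the two ports found in the trace above
  # and will differ on other hardware.
  TGT_IF=cvl_0_0            # target side, gets 10.0.0.2 inside the namespace
  INI_IF=cvl_0_1            # initiator side, gets 10.0.0.1 in the root namespace
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush dev "$TGT_IF"
  ip -4 addr flush dev "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic on port 4420 in on the initiator interface, then
  # check reachability in both directions.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1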
00:16:08.560 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.164 ms 00:16:08.560 00:16:08.560 --- 10.0.0.1 ping statistics --- 00:16:08.560 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:08.560 rtt min/avg/max/mdev = 0.164/0.164/0.164/0.000 ms 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:08.560 06:26:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=2065041 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 2065041 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # '[' -z 2065041 ']' 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:08.560 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.560 [2024-11-20 06:26:40.073721] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:16:08.560 [2024-11-20 06:26:40.073842] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:08.560 [2024-11-20 06:26:40.146259] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:08.560 [2024-11-20 06:26:40.208930] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:08.560 [2024-11-20 06:26:40.208980] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:08.560 [2024-11-20 06:26:40.209007] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:08.560 [2024-11-20 06:26:40.209019] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:08.560 [2024-11-20 06:26:40.209028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:08.560 [2024-11-20 06:26:40.210715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.560 [2024-11-20 06:26:40.210776] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:08.560 [2024-11-20 06:26:40.210853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:08.560 [2024-11-20 06:26:40.210857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@866 -- # return 0 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.560 [2024-11-20 06:26:40.365068] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:16:08.560 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.561 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.819 Malloc0 00:16:08.819 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.819 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:08.819 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 
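Around this point the harness launches nvmf_tgt inside the target namespace and then drives it over JSON-RPC: create the TCP transport, create two 64 MiB malloc bdevs, create subsystem cnode1, attach both bdevs as namespaces, and add data and discovery listeners on 10.0.0.2:4420 (the subsystem and listener calls appear in the trace lines that follow). Condensed into a hedged sketch, with rpc.py assumed to use its default /var/tmp/spdk.sock socket rather than anything the log states:

  # Sketch of the traced bring-up; the harness waits for the RPC socket
  # (waitforlisten) before issuing any rpc.py calls.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  NQN=nqn.2016-06.io.spdk:cnode1

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF -m 0xF &        # 4 reactors (mask 0xF), all trace groups

  $RPC nvmf_create_transport -t tcp -o -u 8192   # transport options as traced (-u = I/O unit size)
  $RPC bdev_malloc_create 64 512 -b Malloc0      # 64 MiB bdev, 512-byte blocks
  $RPC bdev_malloc_create 64 512 -b Malloc1
  $RPC nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc0
  $RPC nvmf_subsystem_add_ns "$NQN" Malloc1
  $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420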
00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 Malloc1 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 [2024-11-20 06:26:40.457413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -a 10.0.0.2 -s 4420 00:16:08.820 00:16:08.820 Discovery Log Number of Records 2, Generation counter 2 00:16:08.820 =====Discovery Log Entry 0====== 00:16:08.820 trtype: tcp 00:16:08.820 adrfam: ipv4 00:16:08.820 subtype: current discovery subsystem 00:16:08.820 treq: not required 00:16:08.820 portid: 0 00:16:08.820 trsvcid: 4420 00:16:08.820 subnqn: 
nqn.2014-08.org.nvmexpress.discovery 00:16:08.820 traddr: 10.0.0.2 00:16:08.820 eflags: explicit discovery connections, duplicate discovery information 00:16:08.820 sectype: none 00:16:08.820 =====Discovery Log Entry 1====== 00:16:08.820 trtype: tcp 00:16:08.820 adrfam: ipv4 00:16:08.820 subtype: nvme subsystem 00:16:08.820 treq: not required 00:16:08.820 portid: 0 00:16:08.820 trsvcid: 4420 00:16:08.820 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:08.820 traddr: 10.0.0.2 00:16:08.820 eflags: none 00:16:08.820 sectype: none 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:08.820 06:26:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:09.754 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:09.754 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # local i=0 00:16:09.754 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:16:09.754 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # [[ -n 2 ]] 00:16:09.754 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # nvme_device_counter=2 00:16:09.754 06:26:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # sleep 2 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # nvme_devices=2 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # return 0 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:11.655 06:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:11.655 /dev/nvme0n2 ]] 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.655 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:11.914 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:12.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:12.174 06:26:43 
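The host side of the test, traced in the preceding lines, is ordinary nvme-cli usage: discover, connect, wait for the namespaces to surface as block devices, then disconnect (the serial-disappearance check after the disconnect continues just below). A reconstructed sketch follows, reusing the host NQN/ID generated for this run; a fresh value would normally come from nvme gen-hostnqn.

  # Sketch only; run as root on the initiator side of the rig built earlier.
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
  NQN=nqn.2016-06.io.spdk:cnode1

  # Discovery returns two records here: the discovery subsystem and cnode1.
  nvme discover --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -a 10.0.0.2 -s 4420

  # Connect; the two malloc namespaces appear as /dev/nvme0n1 and /dev/nvme0n2.
  nvme connect --hostnqn="$HOSTNQN" --hostid="$HOSTID" -t tcp -n "$NQN" -a 10.0.0.2 -s 4420

  # The test polls lsblk until both devices report the target's serial number.
  lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME
  nvme list

  nvme disconnect -n "$NQN"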
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1221 -- # local i=0 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1233 -- # return 0 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:12.174 rmmod nvme_tcp 00:16:12.174 rmmod nvme_fabrics 00:16:12.174 rmmod nvme_keyring 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 2065041 ']' 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 2065041 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # '[' -z 2065041 ']' 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # kill -0 2065041 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # uname 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2065041 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2065041' 00:16:12.174 killing process with pid 2065041 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@971 -- # kill 2065041 00:16:12.174 06:26:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@976 -- # wait 2065041 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:12.434 06:26:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:14.974 00:16:14.974 real 0m8.652s 00:16:14.974 user 0m16.506s 00:16:14.974 sys 0m2.366s 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1128 -- # xtrace_disable 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:14.974 ************************************ 00:16:14.974 END TEST nvmf_nvme_cli 00:16:14.974 ************************************ 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:14.974 ************************************ 00:16:14.974 START TEST nvmf_vfio_user 00:16:14.974 ************************************ 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:16:14.974 * Looking for test storage... 00:16:14.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lcov --version 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:16:14.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.974 --rc genhtml_branch_coverage=1 00:16:14.974 --rc genhtml_function_coverage=1 00:16:14.974 --rc genhtml_legend=1 00:16:14.974 --rc geninfo_all_blocks=1 00:16:14.974 --rc geninfo_unexecuted_blocks=1 00:16:14.974 00:16:14.974 ' 00:16:14.974 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:16:14.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.975 --rc genhtml_branch_coverage=1 00:16:14.975 --rc genhtml_function_coverage=1 00:16:14.975 --rc genhtml_legend=1 00:16:14.975 --rc geninfo_all_blocks=1 00:16:14.975 --rc geninfo_unexecuted_blocks=1 00:16:14.975 00:16:14.975 ' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:16:14.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.975 --rc genhtml_branch_coverage=1 00:16:14.975 --rc genhtml_function_coverage=1 00:16:14.975 --rc genhtml_legend=1 00:16:14.975 --rc geninfo_all_blocks=1 00:16:14.975 --rc geninfo_unexecuted_blocks=1 00:16:14.975 00:16:14.975 ' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:16:14.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:14.975 --rc genhtml_branch_coverage=1 00:16:14.975 --rc genhtml_function_coverage=1 00:16:14.975 --rc genhtml_legend=1 00:16:14.975 --rc geninfo_all_blocks=1 00:16:14.975 --rc geninfo_unexecuted_blocks=1 00:16:14.975 00:16:14.975 ' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@7 -- # uname -s 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:14.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 
00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2065972 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2065972' 00:16:14.975 Process pid: 2065972 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2065972 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2065972 ']' 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:14.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:16:14.975 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:14.975 [2024-11-20 06:26:46.557779] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:16:14.975 [2024-11-20 06:26:46.557866] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:14.975 [2024-11-20 06:26:46.627682] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:14.975 [2024-11-20 06:26:46.685705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:14.975 [2024-11-20 06:26:46.685771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
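The target in this run is an ordinary nvmf_tgt application pinned to cores 0-3, and the script blocks on the RPC socket before sending any configuration. A minimal sketch of that launch step, reusing the binary path and flags logged above; wait_for_rpc() here is a simplified stand-in for the waitforlisten helper in common.sh, not the actual function:

    #!/usr/bin/env bash
    # Sketch: start an SPDK nvmf target and wait until its RPC socket answers.
    SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    RPC_SOCK=/var/tmp/spdk.sock

    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
    nvmfpid=$!

    wait_for_rpc() {
        # Poll the UNIX-domain RPC socket; give up if the target exits.
        local i
        for i in $(seq 1 100); do
            "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1 && return 0
            kill -0 "$nvmfpid" 2>/dev/null || return 1
            sleep 0.5
        done
        return 1
    }
    wait_for_rpc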
00:16:14.975 [2024-11-20 06:26:46.685797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:14.975 [2024-11-20 06:26:46.685810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:14.975 [2024-11-20 06:26:46.685820] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:14.975 [2024-11-20 06:26:46.687320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:14.975 [2024-11-20 06:26:46.687380] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:14.975 [2024-11-20 06:26:46.687444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:14.975 [2024-11-20 06:26:46.687447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:15.233 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:16:15.233 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:16:15.233 06:26:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:16.164 06:26:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:16.421 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:16.421 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:16.421 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:16.421 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:16:16.421 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:16.679 Malloc1 00:16:16.679 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:17.243 06:26:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:17.500 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:17.757 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:17.757 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:17.757 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:18.015 Malloc2 00:16:18.015 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
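Everything that follows is configured over rpc.py: one VFIOUSER transport, then for each of the two devices a socket directory under /var/run/vfio-user, a 64 MB malloc bdev with 512-byte blocks, a subsystem, a namespace, and a VFIOUSER listener. A condensed sketch of that per-device loop, using the same rpc.py calls recorded in the log and assuming the target from the previous step is already up:

    # Sketch: configure two vfio-user NVMe subsystems over JSON-RPC.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user

    for i in $(seq 1 2); do                                  # NUM_DEVICES=2
        traddr=/var/run/vfio-user/domain/vfio-user$i/$i
        mkdir -p "$traddr"
        "$rpc" bdev_malloc_create 64 512 -b Malloc$i         # 64 MB, 512 B blocks
        "$rpc" nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        "$rpc" nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        "$rpc" nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
            -t VFIOUSER -a "$traddr" -s 0
    done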
00:16:18.274 06:26:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:18.531 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:18.791 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:18.791 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:18.791 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:18.791 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:18.791 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:18.791 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:18.791 [2024-11-20 06:26:50.516581] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:16:18.791 [2024-11-20 06:26:50.516634] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2066483 ] 00:16:18.791 [2024-11-20 06:26:50.572477] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:18.791 [2024-11-20 06:26:50.581742] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:18.791 [2024-11-20 06:26:50.581776] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f901cad1000 00:16:18.791 [2024-11-20 06:26:50.582738] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.583749] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.584741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.585741] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.586746] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.587750] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.588759] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 
0x3, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.589763] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:18.791 [2024-11-20 06:26:50.590767] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:18.791 [2024-11-20 06:26:50.590787] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f901cac6000 00:16:18.791 [2024-11-20 06:26:50.591909] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:18.791 [2024-11-20 06:26:50.605960] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:18.791 [2024-11-20 06:26:50.606002] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:18.791 [2024-11-20 06:26:50.614890] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:18.791 [2024-11-20 06:26:50.614954] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:18.791 [2024-11-20 06:26:50.615044] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:18.791 [2024-11-20 06:26:50.615071] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:18.791 [2024-11-20 06:26:50.615082] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:18.791 [2024-11-20 06:26:50.615889] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:18.791 [2024-11-20 06:26:50.615910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:18.791 [2024-11-20 06:26:50.615922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:18.791 [2024-11-20 06:26:50.616895] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:18.791 [2024-11-20 06:26:50.616914] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:18.791 [2024-11-20 06:26:50.616928] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:18.791 [2024-11-20 06:26:50.617901] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:18.791 [2024-11-20 06:26:50.617920] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:18.791 [2024-11-20 06:26:50.618908] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 
00:16:18.792 [2024-11-20 06:26:50.618944] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:18.792 [2024-11-20 06:26:50.618953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:18.792 [2024-11-20 06:26:50.618966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:18.792 [2024-11-20 06:26:50.619076] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:18.792 [2024-11-20 06:26:50.619085] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:18.792 [2024-11-20 06:26:50.619094] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:18.792 [2024-11-20 06:26:50.619920] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:18.792 [2024-11-20 06:26:50.620921] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:18.792 [2024-11-20 06:26:50.621923] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:18.792 [2024-11-20 06:26:50.622919] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:19.087 [2024-11-20 06:26:50.623044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:19.087 [2024-11-20 06:26:50.623935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:19.087 [2024-11-20 06:26:50.623958] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:19.087 [2024-11-20 06:26:50.623968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.623992] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:19.087 [2024-11-20 06:26:50.624006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624029] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:19.087 [2024-11-20 06:26:50.624039] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.087 [2024-11-20 06:26:50.624045] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.087 [2024-11-20 06:26:50.624063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 
PRP2 0x0 00:16:19.087 [2024-11-20 06:26:50.624122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:19.087 [2024-11-20 06:26:50.624138] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:19.087 [2024-11-20 06:26:50.624146] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:19.087 [2024-11-20 06:26:50.624153] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:19.087 [2024-11-20 06:26:50.624161] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:19.087 [2024-11-20 06:26:50.624175] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:19.087 [2024-11-20 06:26:50.624183] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:19.087 [2024-11-20 06:26:50.624191] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624206] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:19.087 [2024-11-20 06:26:50.624239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:19.087 [2024-11-20 06:26:50.624255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.087 [2024-11-20 06:26:50.624268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.087 [2024-11-20 06:26:50.624295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.087 [2024-11-20 06:26:50.624318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:19.087 [2024-11-20 06:26:50.624328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624359] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:19.087 [2024-11-20 06:26:50.624373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:19.087 [2024-11-20 06:26:50.624389] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:19.087 
[2024-11-20 06:26:50.624399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624410] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624420] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624433] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:19.087 [2024-11-20 06:26:50.624446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:19.087 [2024-11-20 06:26:50.624515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:19.087 [2024-11-20 06:26:50.624546] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:19.087 [2024-11-20 06:26:50.624555] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:19.088 [2024-11-20 06:26:50.624561] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.088 [2024-11-20 06:26:50.624571] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.624586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.624617] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:19.088 [2024-11-20 06:26:50.624639] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624654] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624681] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:19.088 [2024-11-20 06:26:50.624690] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.088 [2024-11-20 06:26:50.624695] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.088 [2024-11-20 06:26:50.624704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.624731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.624753] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace 
id descriptors (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624767] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624783] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:19.088 [2024-11-20 06:26:50.624792] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.088 [2024-11-20 06:26:50.624798] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.088 [2024-11-20 06:26:50.624807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.624821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.624835] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624846] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624859] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624869] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624877] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624885] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624893] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:19.088 [2024-11-20 06:26:50.624901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:19.088 [2024-11-20 06:26:50.624909] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:19.088 [2024-11-20 06:26:50.624932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.624949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.624968] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.624980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.625009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 
cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.625022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.625038] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.625050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.625088] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:19.088 [2024-11-20 06:26:50.625099] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:19.088 [2024-11-20 06:26:50.625106] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:19.088 [2024-11-20 06:26:50.625112] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:19.088 [2024-11-20 06:26:50.625118] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:19.088 [2024-11-20 06:26:50.625131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:19.088 [2024-11-20 06:26:50.625145] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:19.088 [2024-11-20 06:26:50.625164] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:19.088 [2024-11-20 06:26:50.625170] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.088 [2024-11-20 06:26:50.625179] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.625191] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:19.088 [2024-11-20 06:26:50.625200] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:19.088 [2024-11-20 06:26:50.625206] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.088 [2024-11-20 06:26:50.625225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.625237] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:19.088 [2024-11-20 06:26:50.625246] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:19.088 [2024-11-20 06:26:50.625252] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:19.088 [2024-11-20 06:26:50.625261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:19.088 [2024-11-20 06:26:50.625274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.625295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 
sqhd:0011 p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.625338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:19.088 [2024-11-20 06:26:50.625353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:19.088 ===================================================== 00:16:19.088 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:19.088 ===================================================== 00:16:19.088 Controller Capabilities/Features 00:16:19.088 ================================ 00:16:19.088 Vendor ID: 4e58 00:16:19.088 Subsystem Vendor ID: 4e58 00:16:19.088 Serial Number: SPDK1 00:16:19.088 Model Number: SPDK bdev Controller 00:16:19.088 Firmware Version: 25.01 00:16:19.088 Recommended Arb Burst: 6 00:16:19.088 IEEE OUI Identifier: 8d 6b 50 00:16:19.088 Multi-path I/O 00:16:19.088 May have multiple subsystem ports: Yes 00:16:19.088 May have multiple controllers: Yes 00:16:19.088 Associated with SR-IOV VF: No 00:16:19.088 Max Data Transfer Size: 131072 00:16:19.088 Max Number of Namespaces: 32 00:16:19.088 Max Number of I/O Queues: 127 00:16:19.088 NVMe Specification Version (VS): 1.3 00:16:19.088 NVMe Specification Version (Identify): 1.3 00:16:19.088 Maximum Queue Entries: 256 00:16:19.088 Contiguous Queues Required: Yes 00:16:19.088 Arbitration Mechanisms Supported 00:16:19.088 Weighted Round Robin: Not Supported 00:16:19.088 Vendor Specific: Not Supported 00:16:19.088 Reset Timeout: 15000 ms 00:16:19.088 Doorbell Stride: 4 bytes 00:16:19.088 NVM Subsystem Reset: Not Supported 00:16:19.089 Command Sets Supported 00:16:19.089 NVM Command Set: Supported 00:16:19.089 Boot Partition: Not Supported 00:16:19.089 Memory Page Size Minimum: 4096 bytes 00:16:19.089 Memory Page Size Maximum: 4096 bytes 00:16:19.089 Persistent Memory Region: Not Supported 00:16:19.089 Optional Asynchronous Events Supported 00:16:19.089 Namespace Attribute Notices: Supported 00:16:19.089 Firmware Activation Notices: Not Supported 00:16:19.089 ANA Change Notices: Not Supported 00:16:19.089 PLE Aggregate Log Change Notices: Not Supported 00:16:19.089 LBA Status Info Alert Notices: Not Supported 00:16:19.089 EGE Aggregate Log Change Notices: Not Supported 00:16:19.089 Normal NVM Subsystem Shutdown event: Not Supported 00:16:19.089 Zone Descriptor Change Notices: Not Supported 00:16:19.089 Discovery Log Change Notices: Not Supported 00:16:19.089 Controller Attributes 00:16:19.089 128-bit Host Identifier: Supported 00:16:19.089 Non-Operational Permissive Mode: Not Supported 00:16:19.089 NVM Sets: Not Supported 00:16:19.089 Read Recovery Levels: Not Supported 00:16:19.089 Endurance Groups: Not Supported 00:16:19.089 Predictable Latency Mode: Not Supported 00:16:19.089 Traffic Based Keep ALive: Not Supported 00:16:19.089 Namespace Granularity: Not Supported 00:16:19.089 SQ Associations: Not Supported 00:16:19.089 UUID List: Not Supported 00:16:19.089 Multi-Domain Subsystem: Not Supported 00:16:19.089 Fixed Capacity Management: Not Supported 00:16:19.089 Variable Capacity Management: Not Supported 00:16:19.089 Delete Endurance Group: Not Supported 00:16:19.089 Delete NVM Set: Not Supported 00:16:19.089 Extended LBA Formats Supported: Not Supported 00:16:19.089 Flexible Data Placement Supported: Not Supported 00:16:19.089 00:16:19.089 Controller Memory Buffer Support 00:16:19.089 ================================ 00:16:19.089 
Supported: No 00:16:19.089 00:16:19.089 Persistent Memory Region Support 00:16:19.089 ================================ 00:16:19.089 Supported: No 00:16:19.089 00:16:19.089 Admin Command Set Attributes 00:16:19.089 ============================ 00:16:19.089 Security Send/Receive: Not Supported 00:16:19.089 Format NVM: Not Supported 00:16:19.089 Firmware Activate/Download: Not Supported 00:16:19.089 Namespace Management: Not Supported 00:16:19.089 Device Self-Test: Not Supported 00:16:19.089 Directives: Not Supported 00:16:19.089 NVMe-MI: Not Supported 00:16:19.089 Virtualization Management: Not Supported 00:16:19.089 Doorbell Buffer Config: Not Supported 00:16:19.089 Get LBA Status Capability: Not Supported 00:16:19.089 Command & Feature Lockdown Capability: Not Supported 00:16:19.089 Abort Command Limit: 4 00:16:19.089 Async Event Request Limit: 4 00:16:19.089 Number of Firmware Slots: N/A 00:16:19.089 Firmware Slot 1 Read-Only: N/A 00:16:19.089 Firmware Activation Without Reset: N/A 00:16:19.089 Multiple Update Detection Support: N/A 00:16:19.089 Firmware Update Granularity: No Information Provided 00:16:19.089 Per-Namespace SMART Log: No 00:16:19.089 Asymmetric Namespace Access Log Page: Not Supported 00:16:19.089 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:19.089 Command Effects Log Page: Supported 00:16:19.089 Get Log Page Extended Data: Supported 00:16:19.089 Telemetry Log Pages: Not Supported 00:16:19.089 Persistent Event Log Pages: Not Supported 00:16:19.089 Supported Log Pages Log Page: May Support 00:16:19.089 Commands Supported & Effects Log Page: Not Supported 00:16:19.089 Feature Identifiers & Effects Log Page:May Support 00:16:19.089 NVMe-MI Commands & Effects Log Page: May Support 00:16:19.089 Data Area 4 for Telemetry Log: Not Supported 00:16:19.089 Error Log Page Entries Supported: 128 00:16:19.089 Keep Alive: Supported 00:16:19.089 Keep Alive Granularity: 10000 ms 00:16:19.089 00:16:19.089 NVM Command Set Attributes 00:16:19.089 ========================== 00:16:19.089 Submission Queue Entry Size 00:16:19.089 Max: 64 00:16:19.089 Min: 64 00:16:19.089 Completion Queue Entry Size 00:16:19.089 Max: 16 00:16:19.089 Min: 16 00:16:19.089 Number of Namespaces: 32 00:16:19.089 Compare Command: Supported 00:16:19.089 Write Uncorrectable Command: Not Supported 00:16:19.089 Dataset Management Command: Supported 00:16:19.089 Write Zeroes Command: Supported 00:16:19.089 Set Features Save Field: Not Supported 00:16:19.089 Reservations: Not Supported 00:16:19.089 Timestamp: Not Supported 00:16:19.089 Copy: Supported 00:16:19.089 Volatile Write Cache: Present 00:16:19.089 Atomic Write Unit (Normal): 1 00:16:19.089 Atomic Write Unit (PFail): 1 00:16:19.089 Atomic Compare & Write Unit: 1 00:16:19.089 Fused Compare & Write: Supported 00:16:19.089 Scatter-Gather List 00:16:19.089 SGL Command Set: Supported (Dword aligned) 00:16:19.089 SGL Keyed: Not Supported 00:16:19.089 SGL Bit Bucket Descriptor: Not Supported 00:16:19.089 SGL Metadata Pointer: Not Supported 00:16:19.089 Oversized SGL: Not Supported 00:16:19.089 SGL Metadata Address: Not Supported 00:16:19.089 SGL Offset: Not Supported 00:16:19.089 Transport SGL Data Block: Not Supported 00:16:19.089 Replay Protected Memory Block: Not Supported 00:16:19.089 00:16:19.089 Firmware Slot Information 00:16:19.089 ========================= 00:16:19.089 Active slot: 1 00:16:19.089 Slot 1 Firmware Revision: 25.01 00:16:19.089 00:16:19.089 00:16:19.089 Commands Supported and Effects 00:16:19.089 ============================== 00:16:19.089 Admin 
Commands 00:16:19.089 -------------- 00:16:19.089 Get Log Page (02h): Supported 00:16:19.089 Identify (06h): Supported 00:16:19.089 Abort (08h): Supported 00:16:19.089 Set Features (09h): Supported 00:16:19.089 Get Features (0Ah): Supported 00:16:19.089 Asynchronous Event Request (0Ch): Supported 00:16:19.089 Keep Alive (18h): Supported 00:16:19.089 I/O Commands 00:16:19.089 ------------ 00:16:19.089 Flush (00h): Supported LBA-Change 00:16:19.089 Write (01h): Supported LBA-Change 00:16:19.089 Read (02h): Supported 00:16:19.089 Compare (05h): Supported 00:16:19.089 Write Zeroes (08h): Supported LBA-Change 00:16:19.089 Dataset Management (09h): Supported LBA-Change 00:16:19.089 Copy (19h): Supported LBA-Change 00:16:19.089 00:16:19.089 Error Log 00:16:19.089 ========= 00:16:19.089 00:16:19.089 Arbitration 00:16:19.089 =========== 00:16:19.089 Arbitration Burst: 1 00:16:19.089 00:16:19.089 Power Management 00:16:19.089 ================ 00:16:19.089 Number of Power States: 1 00:16:19.089 Current Power State: Power State #0 00:16:19.089 Power State #0: 00:16:19.089 Max Power: 0.00 W 00:16:19.089 Non-Operational State: Operational 00:16:19.089 Entry Latency: Not Reported 00:16:19.089 Exit Latency: Not Reported 00:16:19.089 Relative Read Throughput: 0 00:16:19.089 Relative Read Latency: 0 00:16:19.089 Relative Write Throughput: 0 00:16:19.089 Relative Write Latency: 0 00:16:19.089 Idle Power: Not Reported 00:16:19.089 Active Power: Not Reported 00:16:19.089 Non-Operational Permissive Mode: Not Supported 00:16:19.089 00:16:19.089 Health Information 00:16:19.089 ================== 00:16:19.089 Critical Warnings: 00:16:19.089 Available Spare Space: OK 00:16:19.089 Temperature: OK 00:16:19.089 Device Reliability: OK 00:16:19.089 Read Only: No 00:16:19.089 Volatile Memory Backup: OK 00:16:19.089 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:19.089 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:19.089 Available Spare: 0% 00:16:19.089 Available Sp[2024-11-20 06:26:50.625479] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:19.089 [2024-11-20 06:26:50.625496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:19.089 [2024-11-20 06:26:50.625538] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:19.089 [2024-11-20 06:26:50.625557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.089 [2024-11-20 06:26:50.625568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.089 [2024-11-20 06:26:50.625579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.089 [2024-11-20 06:26:50.625589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:19.089 [2024-11-20 06:26:50.625955] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:19.089 [2024-11-20 06:26:50.625974] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:19.089 [2024-11-20 06:26:50.626953] 
vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:19.089 [2024-11-20 06:26:50.627036] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:19.089 [2024-11-20 06:26:50.627051] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:19.089 [2024-11-20 06:26:50.627963] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:19.089 [2024-11-20 06:26:50.627986] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:19.089 [2024-11-20 06:26:50.628043] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:19.089 [2024-11-20 06:26:50.634318] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:19.089 are Threshold: 0% 00:16:19.089 Life Percentage Used: 0% 00:16:19.089 Data Units Read: 0 00:16:19.089 Data Units Written: 0 00:16:19.089 Host Read Commands: 0 00:16:19.089 Host Write Commands: 0 00:16:19.089 Controller Busy Time: 0 minutes 00:16:19.089 Power Cycles: 0 00:16:19.089 Power On Hours: 0 hours 00:16:19.089 Unsafe Shutdowns: 0 00:16:19.089 Unrecoverable Media Errors: 0 00:16:19.089 Lifetime Error Log Entries: 0 00:16:19.089 Warning Temperature Time: 0 minutes 00:16:19.089 Critical Temperature Time: 0 minutes 00:16:19.089 00:16:19.089 Number of Queues 00:16:19.089 ================ 00:16:19.089 Number of I/O Submission Queues: 127 00:16:19.089 Number of I/O Completion Queues: 127 00:16:19.089 00:16:19.089 Active Namespaces 00:16:19.089 ================= 00:16:19.089 Namespace ID:1 00:16:19.089 Error Recovery Timeout: Unlimited 00:16:19.089 Command Set Identifier: NVM (00h) 00:16:19.089 Deallocate: Supported 00:16:19.089 Deallocated/Unwritten Error: Not Supported 00:16:19.089 Deallocated Read Value: Unknown 00:16:19.089 Deallocate in Write Zeroes: Not Supported 00:16:19.089 Deallocated Guard Field: 0xFFFF 00:16:19.089 Flush: Supported 00:16:19.090 Reservation: Supported 00:16:19.090 Namespace Sharing Capabilities: Multiple Controllers 00:16:19.090 Size (in LBAs): 131072 (0GiB) 00:16:19.090 Capacity (in LBAs): 131072 (0GiB) 00:16:19.090 Utilization (in LBAs): 131072 (0GiB) 00:16:19.090 NGUID: B6775D0464114C1F9599B35DA7BB9BA6 00:16:19.090 UUID: b6775d04-6411-4c1f-9599-b35da7bb9ba6 00:16:19.090 Thin Provisioning: Not Supported 00:16:19.090 Per-NS Atomic Units: Yes 00:16:19.090 Atomic Boundary Size (Normal): 0 00:16:19.090 Atomic Boundary Size (PFail): 0 00:16:19.090 Atomic Boundary Offset: 0 00:16:19.090 Maximum Single Source Range Length: 65535 00:16:19.090 Maximum Copy Length: 65535 00:16:19.090 Maximum Source Range Count: 1 00:16:19.090 NGUID/EUI64 Never Reused: No 00:16:19.090 Namespace Write Protected: No 00:16:19.090 Number of LBA Formats: 1 00:16:19.090 Current LBA Format: LBA Format #00 00:16:19.090 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:19.090 00:16:19.090 06:26:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 
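The identify and perf runs never go through a kernel NVMe driver; each client is pointed at the vfio-user socket directory via the -r transport string plus the subsystem NQN. A short sketch of the invocations used in this test, with queue depth, I/O size, and core mask taken from the commands echoed in the log:

    # Sketch: exercise the vfio-user controller from userspace clients.
    # Transport string: 'trtype:VFIOUSER traddr:<socket dir> subnqn:<nqn>'
    TR='trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1'
    BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin

    "$BIN/spdk_nvme_identify" -r "$TR" -g -L nvme -L nvme_vfio -L vfio_pci
    "$BIN/spdk_nvme_perf" -r "$TR" -s 256 -g -q 128 -o 4096 -w read  -t 5 -c 0x2
    "$BIN/spdk_nvme_perf" -r "$TR" -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2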
00:16:19.090 [2024-11-20 06:26:50.886233] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:24.410 Initializing NVMe Controllers 00:16:24.410 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:24.410 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:24.410 Initialization complete. Launching workers. 00:16:24.410 ======================================================== 00:16:24.410 Latency(us) 00:16:24.410 Device Information : IOPS MiB/s Average min max 00:16:24.410 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33620.79 131.33 3808.09 1183.45 7509.29 00:16:24.410 ======================================================== 00:16:24.410 Total : 33620.79 131.33 3808.09 1183.45 7509.29 00:16:24.410 00:16:24.410 [2024-11-20 06:26:55.908899] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:24.410 06:26:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:24.410 [2024-11-20 06:26:56.152102] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:29.670 Initializing NVMe Controllers 00:16:29.670 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:29.670 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:29.670 Initialization complete. Launching workers. 
00:16:29.670 ======================================================== 00:16:29.670 Latency(us) 00:16:29.670 Device Information : IOPS MiB/s Average min max 00:16:29.670 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16051.20 62.70 7982.67 6938.41 11027.89 00:16:29.670 ======================================================== 00:16:29.670 Total : 16051.20 62.70 7982.67 6938.41 11027.89 00:16:29.670 00:16:29.670 [2024-11-20 06:27:01.187076] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:29.670 06:27:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:29.670 [2024-11-20 06:27:01.415161] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:34.933 [2024-11-20 06:27:06.485685] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:34.933 Initializing NVMe Controllers 00:16:34.933 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:34.933 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:34.933 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:34.933 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:34.933 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:34.933 Initialization complete. Launching workers. 00:16:34.933 Starting thread on core 2 00:16:34.933 Starting thread on core 3 00:16:34.933 Starting thread on core 1 00:16:34.933 06:27:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:35.190 [2024-11-20 06:27:06.814426] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.469 [2024-11-20 06:27:10.087570] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:38.469 Initializing NVMe Controllers 00:16:38.469 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:38.469 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:38.469 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:38.469 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:38.469 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:38.469 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:38.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:38.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:38.469 Initialization complete. Launching workers. 
00:16:38.469 Starting thread on core 1 with urgent priority queue 00:16:38.469 Starting thread on core 2 with urgent priority queue 00:16:38.469 Starting thread on core 3 with urgent priority queue 00:16:38.469 Starting thread on core 0 with urgent priority queue 00:16:38.470 SPDK bdev Controller (SPDK1 ) core 0: 4545.67 IO/s 22.00 secs/100000 ios 00:16:38.470 SPDK bdev Controller (SPDK1 ) core 1: 4561.67 IO/s 21.92 secs/100000 ios 00:16:38.470 SPDK bdev Controller (SPDK1 ) core 2: 4568.67 IO/s 21.89 secs/100000 ios 00:16:38.470 SPDK bdev Controller (SPDK1 ) core 3: 5048.33 IO/s 19.81 secs/100000 ios 00:16:38.470 ======================================================== 00:16:38.470 00:16:38.470 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:38.727 [2024-11-20 06:27:10.400819] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.727 Initializing NVMe Controllers 00:16:38.727 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:38.727 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:38.727 Namespace ID: 1 size: 0GB 00:16:38.727 Initialization complete. 00:16:38.727 INFO: using host memory buffer for IO 00:16:38.727 Hello world! 00:16:38.727 [2024-11-20 06:27:10.438517] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:38.727 06:27:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:38.985 [2024-11-20 06:27:10.748752] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:40.359 Initializing NVMe Controllers 00:16:40.359 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:40.359 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:40.359 Initialization complete. Launching workers. 
00:16:40.359 submit (in ns) avg, min, max = 9382.7, 3558.9, 4019215.6 00:16:40.359 complete (in ns) avg, min, max = 23623.6, 2063.3, 4102094.4 00:16:40.359 00:16:40.359 Submit histogram 00:16:40.359 ================ 00:16:40.359 Range in us Cumulative Count 00:16:40.359 3.556 - 3.579: 0.5799% ( 77) 00:16:40.359 3.579 - 3.603: 4.4585% ( 515) 00:16:40.359 3.603 - 3.627: 12.7278% ( 1098) 00:16:40.359 3.627 - 3.650: 25.5234% ( 1699) 00:16:40.359 3.650 - 3.674: 34.8923% ( 1244) 00:16:40.359 3.674 - 3.698: 42.5516% ( 1017) 00:16:40.359 3.698 - 3.721: 48.2753% ( 760) 00:16:40.359 3.721 - 3.745: 53.1933% ( 653) 00:16:40.359 3.745 - 3.769: 57.6518% ( 592) 00:16:40.359 3.769 - 3.793: 61.3421% ( 490) 00:16:40.359 3.793 - 3.816: 64.6558% ( 440) 00:16:40.359 3.816 - 3.840: 67.6834% ( 402) 00:16:40.359 3.840 - 3.864: 71.9009% ( 560) 00:16:40.359 3.864 - 3.887: 76.7284% ( 641) 00:16:40.359 3.887 - 3.911: 81.1945% ( 593) 00:16:40.359 3.911 - 3.935: 84.4028% ( 426) 00:16:40.359 3.935 - 3.959: 86.3006% ( 252) 00:16:40.359 3.959 - 3.982: 88.2588% ( 260) 00:16:40.359 3.982 - 4.006: 90.0738% ( 241) 00:16:40.359 4.006 - 4.030: 91.3014% ( 163) 00:16:40.359 4.030 - 4.053: 92.5215% ( 162) 00:16:40.359 4.053 - 4.077: 93.4478% ( 123) 00:16:40.359 4.077 - 4.101: 94.2386% ( 105) 00:16:40.359 4.101 - 4.124: 95.0444% ( 107) 00:16:40.359 4.124 - 4.148: 95.6243% ( 77) 00:16:40.359 4.148 - 4.172: 96.0536% ( 57) 00:16:40.359 4.172 - 4.196: 96.3172% ( 35) 00:16:40.359 4.196 - 4.219: 96.5206% ( 27) 00:16:40.359 4.219 - 4.243: 96.6712% ( 20) 00:16:40.359 4.243 - 4.267: 96.7616% ( 12) 00:16:40.359 4.267 - 4.290: 96.9047% ( 19) 00:16:40.359 4.290 - 4.314: 97.0854% ( 24) 00:16:40.359 4.314 - 4.338: 97.1607% ( 10) 00:16:40.359 4.338 - 4.361: 97.2285% ( 9) 00:16:40.359 4.361 - 4.385: 97.3339% ( 14) 00:16:40.359 4.385 - 4.409: 97.3641% ( 4) 00:16:40.359 4.409 - 4.433: 97.4017% ( 5) 00:16:40.359 4.433 - 4.456: 97.4469% ( 6) 00:16:40.359 4.456 - 4.480: 97.4695% ( 3) 00:16:40.359 4.504 - 4.527: 97.4996% ( 4) 00:16:40.359 4.551 - 4.575: 97.5147% ( 2) 00:16:40.359 4.575 - 4.599: 97.5448% ( 4) 00:16:40.359 4.599 - 4.622: 97.5599% ( 2) 00:16:40.359 4.622 - 4.646: 97.6427% ( 11) 00:16:40.359 4.646 - 4.670: 97.6728% ( 4) 00:16:40.359 4.670 - 4.693: 97.6804% ( 1) 00:16:40.359 4.693 - 4.717: 97.7180% ( 5) 00:16:40.359 4.717 - 4.741: 97.7707% ( 7) 00:16:40.359 4.741 - 4.764: 97.7858% ( 2) 00:16:40.359 4.764 - 4.788: 97.8310% ( 6) 00:16:40.359 4.788 - 4.812: 97.8536% ( 3) 00:16:40.359 4.812 - 4.836: 97.8912% ( 5) 00:16:40.359 4.836 - 4.859: 97.9364% ( 6) 00:16:40.359 4.859 - 4.883: 97.9515% ( 2) 00:16:40.359 4.883 - 4.907: 97.9666% ( 2) 00:16:40.359 4.907 - 4.930: 97.9892% ( 3) 00:16:40.359 4.930 - 4.954: 98.0343% ( 6) 00:16:40.359 4.978 - 5.001: 98.0494% ( 2) 00:16:40.359 5.001 - 5.025: 98.0645% ( 2) 00:16:40.359 5.025 - 5.049: 98.0795% ( 2) 00:16:40.359 5.049 - 5.073: 98.1247% ( 6) 00:16:40.359 5.073 - 5.096: 98.1398% ( 2) 00:16:40.359 5.096 - 5.120: 98.1473% ( 1) 00:16:40.359 5.120 - 5.144: 98.1624% ( 2) 00:16:40.359 5.144 - 5.167: 98.1774% ( 2) 00:16:40.359 5.167 - 5.191: 98.1850% ( 1) 00:16:40.359 5.215 - 5.239: 98.1925% ( 1) 00:16:40.359 5.262 - 5.286: 98.2000% ( 1) 00:16:40.359 5.286 - 5.310: 98.2076% ( 1) 00:16:40.359 5.428 - 5.452: 98.2226% ( 2) 00:16:40.359 5.452 - 5.476: 98.2302% ( 1) 00:16:40.359 5.476 - 5.499: 98.2377% ( 1) 00:16:40.359 5.594 - 5.618: 98.2452% ( 1) 00:16:40.359 5.689 - 5.713: 98.2527% ( 1) 00:16:40.359 5.736 - 5.760: 98.2603% ( 1) 00:16:40.359 5.760 - 5.784: 98.2753% ( 2) 00:16:40.359 5.807 - 5.831: 98.2829% ( 1) 
00:16:40.359 5.831 - 5.855: 98.2904% ( 1) 00:16:40.359 5.902 - 5.926: 98.2979% ( 1) 00:16:40.359 5.973 - 5.997: 98.3055% ( 1) 00:16:40.359 6.044 - 6.068: 98.3205% ( 2) 00:16:40.359 6.068 - 6.116: 98.3281% ( 1) 00:16:40.359 6.353 - 6.400: 98.3356% ( 1) 00:16:40.359 6.447 - 6.495: 98.3431% ( 1) 00:16:40.359 6.921 - 6.969: 98.3507% ( 1) 00:16:40.359 7.016 - 7.064: 98.3582% ( 1) 00:16:40.359 7.159 - 7.206: 98.3657% ( 1) 00:16:40.359 7.206 - 7.253: 98.3732% ( 1) 00:16:40.359 7.253 - 7.301: 98.3883% ( 2) 00:16:40.359 7.301 - 7.348: 98.3958% ( 1) 00:16:40.359 7.348 - 7.396: 98.4034% ( 1) 00:16:40.359 7.490 - 7.538: 98.4109% ( 1) 00:16:40.359 7.633 - 7.680: 98.4184% ( 1) 00:16:40.359 7.680 - 7.727: 98.4260% ( 1) 00:16:40.359 7.775 - 7.822: 98.4335% ( 1) 00:16:40.359 7.822 - 7.870: 98.4410% ( 1) 00:16:40.359 7.870 - 7.917: 98.4561% ( 2) 00:16:40.359 7.964 - 8.012: 98.4636% ( 1) 00:16:40.359 8.059 - 8.107: 98.4712% ( 1) 00:16:40.359 8.107 - 8.154: 98.4787% ( 1) 00:16:40.359 8.201 - 8.249: 98.4862% ( 1) 00:16:40.359 8.439 - 8.486: 98.4937% ( 1) 00:16:40.359 8.486 - 8.533: 98.5013% ( 1) 00:16:40.359 8.581 - 8.628: 98.5088% ( 1) 00:16:40.359 8.628 - 8.676: 98.5163% ( 1) 00:16:40.359 8.676 - 8.723: 98.5239% ( 1) 00:16:40.359 8.723 - 8.770: 98.5314% ( 1) 00:16:40.359 8.960 - 9.007: 98.5389% ( 1) 00:16:40.359 9.102 - 9.150: 98.5540% ( 2) 00:16:40.359 9.150 - 9.197: 98.5615% ( 1) 00:16:40.359 9.197 - 9.244: 98.5766% ( 2) 00:16:40.359 9.387 - 9.434: 98.5841% ( 1) 00:16:40.359 9.434 - 9.481: 98.5992% ( 2) 00:16:40.359 9.576 - 9.624: 98.6067% ( 1) 00:16:40.359 9.624 - 9.671: 98.6218% ( 2) 00:16:40.359 9.813 - 9.861: 98.6368% ( 2) 00:16:40.359 9.956 - 10.003: 98.6444% ( 1) 00:16:40.359 10.193 - 10.240: 98.6594% ( 2) 00:16:40.359 10.240 - 10.287: 98.6670% ( 1) 00:16:40.359 10.335 - 10.382: 98.6745% ( 1) 00:16:40.360 10.382 - 10.430: 98.6820% ( 1) 00:16:40.360 10.430 - 10.477: 98.6896% ( 1) 00:16:40.360 10.761 - 10.809: 98.7046% ( 2) 00:16:40.360 10.951 - 10.999: 98.7122% ( 1) 00:16:40.360 11.093 - 11.141: 98.7272% ( 2) 00:16:40.360 11.188 - 11.236: 98.7347% ( 1) 00:16:40.360 11.236 - 11.283: 98.7423% ( 1) 00:16:40.360 11.662 - 11.710: 98.7498% ( 1) 00:16:40.360 11.852 - 11.899: 98.7573% ( 1) 00:16:40.360 12.136 - 12.231: 98.7649% ( 1) 00:16:40.360 12.231 - 12.326: 98.7724% ( 1) 00:16:40.360 12.516 - 12.610: 98.7875% ( 2) 00:16:40.360 12.705 - 12.800: 98.8025% ( 2) 00:16:40.360 12.895 - 12.990: 98.8101% ( 1) 00:16:40.360 14.127 - 14.222: 98.8176% ( 1) 00:16:40.360 14.222 - 14.317: 98.8327% ( 2) 00:16:40.360 14.507 - 14.601: 98.8402% ( 1) 00:16:40.360 14.601 - 14.696: 98.8552% ( 2) 00:16:40.360 14.886 - 14.981: 98.8628% ( 1) 00:16:40.360 15.076 - 15.170: 98.8703% ( 1) 00:16:40.360 15.550 - 15.644: 98.8778% ( 1) 00:16:40.360 15.644 - 15.739: 98.8854% ( 1) 00:16:40.360 16.972 - 17.067: 98.8929% ( 1) 00:16:40.360 17.161 - 17.256: 98.9004% ( 1) 00:16:40.360 17.256 - 17.351: 98.9080% ( 1) 00:16:40.360 17.351 - 17.446: 98.9306% ( 3) 00:16:40.360 17.446 - 17.541: 98.9381% ( 1) 00:16:40.360 17.541 - 17.636: 98.9607% ( 3) 00:16:40.360 17.636 - 17.730: 99.0059% ( 6) 00:16:40.360 17.730 - 17.825: 99.0435% ( 5) 00:16:40.360 17.825 - 17.920: 99.1113% ( 9) 00:16:40.360 17.920 - 18.015: 99.1640% ( 7) 00:16:40.360 18.015 - 18.110: 99.2243% ( 8) 00:16:40.360 18.110 - 18.204: 99.3222% ( 13) 00:16:40.360 18.204 - 18.299: 99.4050% ( 11) 00:16:40.360 18.299 - 18.394: 99.4577% ( 7) 00:16:40.360 18.394 - 18.489: 99.5481% ( 12) 00:16:40.360 18.489 - 18.584: 99.6159% ( 9) 00:16:40.360 18.584 - 18.679: 99.6310% ( 2) 00:16:40.360 18.679 - 
18.773: 99.6762% ( 6) 00:16:40.360 18.773 - 18.868: 99.7138% ( 5) 00:16:40.360 18.868 - 18.963: 99.7439% ( 4) 00:16:40.360 18.963 - 19.058: 99.7590% ( 2) 00:16:40.360 19.058 - 19.153: 99.7816% ( 3) 00:16:40.360 19.153 - 19.247: 99.7891% ( 1) 00:16:40.360 19.342 - 19.437: 99.7967% ( 1) 00:16:40.360 19.532 - 19.627: 99.8042% ( 1) 00:16:40.360 19.627 - 19.721: 99.8192% ( 2) 00:16:40.360 19.911 - 20.006: 99.8268% ( 1) 00:16:40.360 20.575 - 20.670: 99.8343% ( 1) 00:16:40.360 20.954 - 21.049: 99.8418% ( 1) 00:16:40.360 22.471 - 22.566: 99.8494% ( 1) 00:16:40.360 23.609 - 23.704: 99.8569% ( 1) 00:16:40.360 23.988 - 24.083: 99.8644% ( 1) 00:16:40.360 3980.705 - 4004.978: 99.9623% ( 13) 00:16:40.360 4004.978 - 4029.250: 100.0000% ( 5) 00:16:40.360 00:16:40.360 Complete histogram 00:16:40.360 ================== 00:16:40.360 Range in us Cumulative Count 00:16:40.360 2.062 - 2.074: 7.6668% ( 1018) 00:16:40.360 2.074 - 2.086: 40.0663% ( 4302) 00:16:40.360 2.086 - 2.098: 43.0035% ( 390) 00:16:40.360 2.098 - 2.110: 50.5197% ( 998) 00:16:40.360 2.110 - 2.121: 58.6308% ( 1077) 00:16:40.360 2.121 - 2.133: 59.8358% ( 160) 00:16:40.360 2.133 - 2.145: 66.9529% ( 945) 00:16:40.360 2.145 - 2.157: 75.2824% ( 1106) 00:16:40.360 2.157 - 2.169: 76.2690% ( 131) 00:16:40.360 2.169 - 2.181: 79.3493% ( 409) 00:16:40.360 2.181 - 2.193: 81.3376% ( 264) 00:16:40.360 2.193 - 2.204: 81.7216% ( 51) 00:16:40.360 2.204 - 2.216: 84.2747% ( 339) 00:16:40.360 2.216 - 2.228: 88.3943% ( 547) 00:16:40.360 2.228 - 2.240: 90.3223% ( 256) 00:16:40.360 2.240 - 2.252: 92.1750% ( 246) 00:16:40.360 2.252 - 2.264: 93.2746% ( 146) 00:16:40.360 2.264 - 2.276: 93.5307% ( 34) 00:16:40.360 2.276 - 2.287: 93.9072% ( 50) 00:16:40.360 2.287 - 2.299: 94.3516% ( 59) 00:16:40.360 2.299 - 2.311: 95.0972% ( 99) 00:16:40.360 2.311 - 2.323: 95.4135% ( 42) 00:16:40.360 2.323 - 2.335: 95.5038% ( 12) 00:16:40.360 2.335 - 2.347: 95.5189% ( 2) 00:16:40.360 2.347 - 2.359: 95.6093% ( 12) 00:16:40.360 2.359 - 2.370: 95.8202% ( 28) 00:16:40.360 2.370 - 2.382: 96.1967% ( 50) 00:16:40.360 2.382 - 2.394: 96.6335% ( 58) 00:16:40.360 2.394 - 2.406: 96.8293% ( 26) 00:16:40.360 2.406 - 2.418: 97.0101% ( 24) 00:16:40.360 2.418 - 2.430: 97.1908% ( 24) 00:16:40.360 2.430 - 2.441: 97.3867% ( 26) 00:16:40.360 2.441 - 2.453: 97.5599% ( 23) 00:16:40.360 2.453 - 2.465: 97.8009% ( 32) 00:16:40.360 2.465 - 2.477: 97.9364% ( 18) 00:16:40.360 2.477 - 2.489: 98.0193% ( 11) 00:16:40.360 2.489 - 2.501: 98.1247% ( 14) 00:16:40.360 2.501 - 2.513: 98.2226% ( 13) 00:16:40.360 2.513 - 2.524: 98.2979% ( 10) 00:16:40.360 2.524 - 2.536: 98.3356% ( 5) 00:16:40.360 2.536 - 2.548: 98.4034% ( 9) 00:16:40.360 2.548 - 2.560: 98.4109% ( 1) 00:16:40.360 2.560 - 2.572: 98.4260% ( 2) 00:16:40.360 2.572 - 2.584: 98.4486% ( 3) 00:16:40.360 2.584 - 2.596: 98.4561% ( 1) 00:16:40.360 2.596 - 2.607: 98.4787% ( 3) 00:16:40.360 2.619 - 2.631: 98.4862% ( 1) 00:16:40.360 2.631 - 2.643: 98.4937% ( 1) 00:16:40.360 2.679 - 2.690: 98.5088% ( 2) 00:16:40.360 2.690 - 2.702: 98.5163% ( 1) 00:16:40.360 2.726 - 2.738: 98.5239% ( 1) 00:16:40.360 2.738 - 2.750: 98.5465% ( 3) 00:16:40.360 2.892 - 2.904: 98.5540% ( 1) 00:16:40.360 2.987 - 2.999: 98.5615% ( 1) 00:16:40.360 3.034 - 3.058: 98.5691% ( 1) 00:16:40.360 3.081 - 3.105: 98.5766% ( 1) 00:16:40.360 3.200 - 3.224: 98.5841% ( 1) 00:16:40.360 3.413 - 3.437: 98.5917% ( 1) 00:16:40.360 3.437 - 3.461: 98.5992% ( 1) 00:16:40.360 3.461 - 3.484: 98.6067% ( 1) 00:16:40.360 3.484 - 3.508: 98.6142% ( 1) 00:16:40.360 3.508 - 3.532: 98.6368% ( 3) 00:16:40.360 3.532 - 3.556: 98.6519% ( 
2) 00:16:40.360 3.556 - 3.579: 98.6594% ( 1) 00:16:40.360 3.579 - 3.603: 98.6670% ( 1) 00:16:40.360 3.627 - 3.650: 98.6745% ( 1) 00:16:40.360 3.650 - 3.674: 98.6896% ( 2) 00:16:40.360 3.674 - 3.698: 98.6971% ( 1) 00:16:40.360 3.721 - 3.745: 98.7046% ( 1) 00:16:40.360 3.745 - 3.769: 98.7197% ( 2) 00:16:40.360 3.769 - 3.793: 98.7272% ( 1) 00:16:40.360 3.793 - 3.816: 98.7498% ( 3) 00:16:40.360 3.840 - 3.864: 98.7573% ( 1) 00:16:40.360 3.887 - 3.911: 98.7724% ( 2) 00:16:40.360 3.935 - 3.959: 98.7799% ( 1) 00:16:40.360 3.982 - 4.006: 9[2024-11-20 06:27:11.771100] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:40.361 8.7875% ( 1) 00:16:40.361 4.006 - 4.030: 98.7950% ( 1) 00:16:40.361 4.101 - 4.124: 98.8025% ( 1) 00:16:40.361 4.148 - 4.172: 98.8101% ( 1) 00:16:40.361 4.409 - 4.433: 98.8176% ( 1) 00:16:40.361 4.433 - 4.456: 98.8251% ( 1) 00:16:40.361 5.618 - 5.641: 98.8327% ( 1) 00:16:40.361 5.736 - 5.760: 98.8402% ( 1) 00:16:40.361 5.997 - 6.021: 98.8477% ( 1) 00:16:40.361 6.116 - 6.163: 98.8552% ( 1) 00:16:40.361 6.495 - 6.542: 98.8628% ( 1) 00:16:40.361 6.542 - 6.590: 98.8703% ( 1) 00:16:40.361 6.779 - 6.827: 98.8778% ( 1) 00:16:40.361 6.827 - 6.874: 98.8854% ( 1) 00:16:40.361 7.016 - 7.064: 98.8929% ( 1) 00:16:40.361 7.253 - 7.301: 98.9004% ( 1) 00:16:40.361 7.348 - 7.396: 98.9080% ( 1) 00:16:40.361 7.443 - 7.490: 98.9155% ( 1) 00:16:40.361 7.680 - 7.727: 98.9230% ( 1) 00:16:40.361 7.822 - 7.870: 98.9306% ( 1) 00:16:40.361 8.201 - 8.249: 98.9381% ( 1) 00:16:40.361 8.296 - 8.344: 98.9456% ( 1) 00:16:40.361 8.391 - 8.439: 98.9532% ( 1) 00:16:40.361 8.581 - 8.628: 98.9607% ( 1) 00:16:40.361 8.676 - 8.723: 98.9682% ( 1) 00:16:40.361 9.434 - 9.481: 98.9757% ( 1) 00:16:40.361 11.188 - 11.236: 98.9833% ( 1) 00:16:40.361 15.550 - 15.644: 98.9908% ( 1) 00:16:40.361 15.739 - 15.834: 99.0134% ( 3) 00:16:40.361 15.834 - 15.929: 99.0435% ( 4) 00:16:40.361 15.929 - 16.024: 99.0812% ( 5) 00:16:40.361 16.024 - 16.119: 99.1264% ( 6) 00:16:40.361 16.119 - 16.213: 99.1339% ( 1) 00:16:40.361 16.213 - 16.308: 99.1640% ( 4) 00:16:40.361 16.308 - 16.403: 99.2017% ( 5) 00:16:40.361 16.403 - 16.498: 99.2167% ( 2) 00:16:40.361 16.498 - 16.593: 99.2469% ( 4) 00:16:40.361 16.593 - 16.687: 99.2921% ( 6) 00:16:40.361 16.687 - 16.782: 99.3372% ( 6) 00:16:40.361 16.782 - 16.877: 99.3674% ( 4) 00:16:40.361 16.877 - 16.972: 99.3824% ( 2) 00:16:40.361 16.972 - 17.067: 99.3975% ( 2) 00:16:40.361 17.067 - 17.161: 99.4126% ( 2) 00:16:40.361 17.161 - 17.256: 99.4276% ( 2) 00:16:40.361 17.256 - 17.351: 99.4427% ( 2) 00:16:40.361 18.015 - 18.110: 99.4502% ( 1) 00:16:40.361 18.299 - 18.394: 99.4577% ( 1) 00:16:40.361 22.661 - 22.756: 99.4653% ( 1) 00:16:40.361 3980.705 - 4004.978: 99.8494% ( 51) 00:16:40.361 4004.978 - 4029.250: 99.9925% ( 19) 00:16:40.361 4102.068 - 4126.341: 100.0000% ( 1) 00:16:40.361 00:16:40.361 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:40.361 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:40.361 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:40.361 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:40.361 06:27:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:40.361 [ 00:16:40.361 { 00:16:40.361 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:40.361 "subtype": "Discovery", 00:16:40.361 "listen_addresses": [], 00:16:40.361 "allow_any_host": true, 00:16:40.361 "hosts": [] 00:16:40.361 }, 00:16:40.361 { 00:16:40.361 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:40.361 "subtype": "NVMe", 00:16:40.361 "listen_addresses": [ 00:16:40.361 { 00:16:40.361 "trtype": "VFIOUSER", 00:16:40.361 "adrfam": "IPv4", 00:16:40.361 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:40.361 "trsvcid": "0" 00:16:40.361 } 00:16:40.361 ], 00:16:40.361 "allow_any_host": true, 00:16:40.361 "hosts": [], 00:16:40.361 "serial_number": "SPDK1", 00:16:40.361 "model_number": "SPDK bdev Controller", 00:16:40.361 "max_namespaces": 32, 00:16:40.361 "min_cntlid": 1, 00:16:40.361 "max_cntlid": 65519, 00:16:40.361 "namespaces": [ 00:16:40.361 { 00:16:40.361 "nsid": 1, 00:16:40.361 "bdev_name": "Malloc1", 00:16:40.361 "name": "Malloc1", 00:16:40.361 "nguid": "B6775D0464114C1F9599B35DA7BB9BA6", 00:16:40.361 "uuid": "b6775d04-6411-4c1f-9599-b35da7bb9ba6" 00:16:40.361 } 00:16:40.361 ] 00:16:40.361 }, 00:16:40.361 { 00:16:40.361 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:40.361 "subtype": "NVMe", 00:16:40.361 "listen_addresses": [ 00:16:40.361 { 00:16:40.361 "trtype": "VFIOUSER", 00:16:40.361 "adrfam": "IPv4", 00:16:40.361 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:40.361 "trsvcid": "0" 00:16:40.361 } 00:16:40.361 ], 00:16:40.361 "allow_any_host": true, 00:16:40.361 "hosts": [], 00:16:40.361 "serial_number": "SPDK2", 00:16:40.361 "model_number": "SPDK bdev Controller", 00:16:40.361 "max_namespaces": 32, 00:16:40.361 "min_cntlid": 1, 00:16:40.361 "max_cntlid": 65519, 00:16:40.361 "namespaces": [ 00:16:40.361 { 00:16:40.361 "nsid": 1, 00:16:40.361 "bdev_name": "Malloc2", 00:16:40.361 "name": "Malloc2", 00:16:40.361 "nguid": "CD485E414A254C94A9E92F65916D74A8", 00:16:40.361 "uuid": "cd485e41-4a25-4c94-a9e9-2f65916d74a8" 00:16:40.361 } 00:16:40.361 ] 00:16:40.361 } 00:16:40.361 ] 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2069666 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:40.361 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:40.620 [2024-11-20 06:27:12.316808] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:40.620 Malloc3 00:16:40.620 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:40.877 [2024-11-20 06:27:12.697708] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:41.135 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:41.135 Asynchronous Event Request test 00:16:41.135 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.135 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:41.135 Registering asynchronous event callbacks... 00:16:41.135 Starting namespace attribute notice tests for all controllers... 00:16:41.135 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:41.135 aer_cb - Changed Namespace 00:16:41.135 Cleaning up... 00:16:41.135 [ 00:16:41.135 { 00:16:41.135 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:41.135 "subtype": "Discovery", 00:16:41.135 "listen_addresses": [], 00:16:41.135 "allow_any_host": true, 00:16:41.135 "hosts": [] 00:16:41.135 }, 00:16:41.135 { 00:16:41.135 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:41.135 "subtype": "NVMe", 00:16:41.135 "listen_addresses": [ 00:16:41.135 { 00:16:41.135 "trtype": "VFIOUSER", 00:16:41.135 "adrfam": "IPv4", 00:16:41.135 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:41.135 "trsvcid": "0" 00:16:41.135 } 00:16:41.135 ], 00:16:41.135 "allow_any_host": true, 00:16:41.135 "hosts": [], 00:16:41.135 "serial_number": "SPDK1", 00:16:41.135 "model_number": "SPDK bdev Controller", 00:16:41.135 "max_namespaces": 32, 00:16:41.135 "min_cntlid": 1, 00:16:41.135 "max_cntlid": 65519, 00:16:41.135 "namespaces": [ 00:16:41.135 { 00:16:41.135 "nsid": 1, 00:16:41.135 "bdev_name": "Malloc1", 00:16:41.135 "name": "Malloc1", 00:16:41.135 "nguid": "B6775D0464114C1F9599B35DA7BB9BA6", 00:16:41.135 "uuid": "b6775d04-6411-4c1f-9599-b35da7bb9ba6" 00:16:41.135 }, 00:16:41.135 { 00:16:41.135 "nsid": 2, 00:16:41.135 "bdev_name": "Malloc3", 00:16:41.135 "name": "Malloc3", 00:16:41.135 "nguid": "F34B5B8815CD47CBB8D3FD83B091C29A", 00:16:41.135 "uuid": "f34b5b88-15cd-47cb-b8d3-fd83b091c29a" 00:16:41.135 } 00:16:41.135 ] 00:16:41.135 }, 00:16:41.135 { 00:16:41.135 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:41.135 "subtype": "NVMe", 00:16:41.135 "listen_addresses": [ 00:16:41.135 { 00:16:41.135 "trtype": "VFIOUSER", 00:16:41.135 "adrfam": "IPv4", 00:16:41.135 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:41.135 "trsvcid": "0" 00:16:41.135 } 00:16:41.135 ], 00:16:41.135 "allow_any_host": true, 00:16:41.135 "hosts": [], 00:16:41.135 "serial_number": "SPDK2", 00:16:41.135 "model_number": "SPDK bdev 
Controller", 00:16:41.135 "max_namespaces": 32, 00:16:41.135 "min_cntlid": 1, 00:16:41.135 "max_cntlid": 65519, 00:16:41.135 "namespaces": [ 00:16:41.135 { 00:16:41.135 "nsid": 1, 00:16:41.135 "bdev_name": "Malloc2", 00:16:41.135 "name": "Malloc2", 00:16:41.135 "nguid": "CD485E414A254C94A9E92F65916D74A8", 00:16:41.135 "uuid": "cd485e41-4a25-4c94-a9e9-2f65916d74a8" 00:16:41.135 } 00:16:41.135 ] 00:16:41.135 } 00:16:41.135 ] 00:16:41.394 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2069666 00:16:41.394 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:41.394 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:41.394 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:41.394 06:27:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:41.394 [2024-11-20 06:27:12.997808] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:16:41.394 [2024-11-20 06:27:12.997845] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2069679 ] 00:16:41.394 [2024-11-20 06:27:13.046196] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:41.394 [2024-11-20 06:27:13.050551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:41.394 [2024-11-20 06:27:13.050602] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f47ce117000 00:16:41.394 [2024-11-20 06:27:13.051545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:41.394 [2024-11-20 06:27:13.052546] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:41.394 [2024-11-20 06:27:13.053549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:41.394 [2024-11-20 06:27:13.054558] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:41.394 [2024-11-20 06:27:13.055564] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:41.394 [2024-11-20 06:27:13.056569] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:41.395 [2024-11-20 06:27:13.057579] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:41.395 [2024-11-20 06:27:13.058583] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 
00:16:41.395 [2024-11-20 06:27:13.059592] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:41.395 [2024-11-20 06:27:13.059625] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f47ce10c000 00:16:41.395 [2024-11-20 06:27:13.060817] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:41.395 [2024-11-20 06:27:13.079636] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:41.395 [2024-11-20 06:27:13.079686] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:41.395 [2024-11-20 06:27:13.081790] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:41.395 [2024-11-20 06:27:13.081842] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:41.395 [2024-11-20 06:27:13.081925] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:41.395 [2024-11-20 06:27:13.081948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:41.395 [2024-11-20 06:27:13.081958] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:41.395 [2024-11-20 06:27:13.082800] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:41.395 [2024-11-20 06:27:13.082832] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:41.395 [2024-11-20 06:27:13.082844] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:41.395 [2024-11-20 06:27:13.083804] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:41.395 [2024-11-20 06:27:13.083826] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:41.395 [2024-11-20 06:27:13.083839] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:41.395 [2024-11-20 06:27:13.084813] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:41.395 [2024-11-20 06:27:13.084838] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:41.395 [2024-11-20 06:27:13.085822] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:41.395 [2024-11-20 06:27:13.085841] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 
00:16:41.395 [2024-11-20 06:27:13.085850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:41.395 [2024-11-20 06:27:13.085861] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:41.395 [2024-11-20 06:27:13.085970] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:41.395 [2024-11-20 06:27:13.085978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:41.395 [2024-11-20 06:27:13.085986] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:41.395 [2024-11-20 06:27:13.086830] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:41.395 [2024-11-20 06:27:13.087843] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:41.395 [2024-11-20 06:27:13.088852] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:41.395 [2024-11-20 06:27:13.089843] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:41.395 [2024-11-20 06:27:13.089933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:41.395 [2024-11-20 06:27:13.090863] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:41.395 [2024-11-20 06:27:13.090883] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:41.395 [2024-11-20 06:27:13.090892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.090922] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:41.395 [2024-11-20 06:27:13.090940] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.090959] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:41.395 [2024-11-20 06:27:13.090969] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:41.395 [2024-11-20 06:27:13.090975] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.395 [2024-11-20 06:27:13.090992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.097325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:41.395 
[2024-11-20 06:27:13.097349] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:41.395 [2024-11-20 06:27:13.097360] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:41.395 [2024-11-20 06:27:13.097373] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:41.395 [2024-11-20 06:27:13.097382] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:41.395 [2024-11-20 06:27:13.097394] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:41.395 [2024-11-20 06:27:13.097403] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:41.395 [2024-11-20 06:27:13.097411] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.097426] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.097443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.105317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:41.395 [2024-11-20 06:27:13.105342] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.395 [2024-11-20 06:27:13.105356] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.395 [2024-11-20 06:27:13.105368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.395 [2024-11-20 06:27:13.105380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:41.395 [2024-11-20 06:27:13.105389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.105401] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.105414] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.113326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:41.395 [2024-11-20 06:27:13.113350] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:41.395 [2024-11-20 06:27:13.113360] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 
ms) 00:16:41.395 [2024-11-20 06:27:13.113372] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.113382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.113395] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.121315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:41.395 [2024-11-20 06:27:13.121391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.121409] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.121426] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:41.395 [2024-11-20 06:27:13.121436] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:41.395 [2024-11-20 06:27:13.121442] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.395 [2024-11-20 06:27:13.121451] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.129317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:41.395 [2024-11-20 06:27:13.129340] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:41.395 [2024-11-20 06:27:13.129363] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.129378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.129392] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:41.395 [2024-11-20 06:27:13.129400] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:41.395 [2024-11-20 06:27:13.129406] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.395 [2024-11-20 06:27:13.129416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.137328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:41.395 [2024-11-20 06:27:13.137358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.137375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to 
wait for identify namespace id descriptors (timeout 30000 ms) 00:16:41.395 [2024-11-20 06:27:13.137388] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:41.395 [2024-11-20 06:27:13.137396] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:41.395 [2024-11-20 06:27:13.137403] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.395 [2024-11-20 06:27:13.137413] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:41.395 [2024-11-20 06:27:13.145318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:41.395 [2024-11-20 06:27:13.145340] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145353] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145378] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145387] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145408] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:41.396 [2024-11-20 06:27:13.145416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:41.396 [2024-11-20 06:27:13.145425] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:41.396 [2024-11-20 06:27:13.145449] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.153315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:41.396 [2024-11-20 06:27:13.153342] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.161314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:41.396 [2024-11-20 06:27:13.161339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.169326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 
00:16:41.396 [2024-11-20 06:27:13.169352] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.177331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:41.396 [2024-11-20 06:27:13.177363] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:41.396 [2024-11-20 06:27:13.177374] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:41.396 [2024-11-20 06:27:13.177381] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:41.396 [2024-11-20 06:27:13.177386] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:41.396 [2024-11-20 06:27:13.177392] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:41.396 [2024-11-20 06:27:13.177402] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:41.396 [2024-11-20 06:27:13.177414] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:41.396 [2024-11-20 06:27:13.177422] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:41.396 [2024-11-20 06:27:13.177428] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.396 [2024-11-20 06:27:13.177437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.177448] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:41.396 [2024-11-20 06:27:13.177456] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:41.396 [2024-11-20 06:27:13.177462] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.396 [2024-11-20 06:27:13.177470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.177482] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:41.396 [2024-11-20 06:27:13.177490] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:41.396 [2024-11-20 06:27:13.177496] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:41.396 [2024-11-20 06:27:13.177509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:41.396 [2024-11-20 06:27:13.185329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:41.396 [2024-11-20 06:27:13.185357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:41.396 [2024-11-20 06:27:13.185376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:41.396 
[2024-11-20 06:27:13.185388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:41.396 ===================================================== 00:16:41.396 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:41.396 ===================================================== 00:16:41.396 Controller Capabilities/Features 00:16:41.396 ================================ 00:16:41.396 Vendor ID: 4e58 00:16:41.396 Subsystem Vendor ID: 4e58 00:16:41.396 Serial Number: SPDK2 00:16:41.396 Model Number: SPDK bdev Controller 00:16:41.396 Firmware Version: 25.01 00:16:41.396 Recommended Arb Burst: 6 00:16:41.396 IEEE OUI Identifier: 8d 6b 50 00:16:41.396 Multi-path I/O 00:16:41.396 May have multiple subsystem ports: Yes 00:16:41.396 May have multiple controllers: Yes 00:16:41.396 Associated with SR-IOV VF: No 00:16:41.396 Max Data Transfer Size: 131072 00:16:41.396 Max Number of Namespaces: 32 00:16:41.396 Max Number of I/O Queues: 127 00:16:41.396 NVMe Specification Version (VS): 1.3 00:16:41.396 NVMe Specification Version (Identify): 1.3 00:16:41.396 Maximum Queue Entries: 256 00:16:41.396 Contiguous Queues Required: Yes 00:16:41.396 Arbitration Mechanisms Supported 00:16:41.396 Weighted Round Robin: Not Supported 00:16:41.396 Vendor Specific: Not Supported 00:16:41.396 Reset Timeout: 15000 ms 00:16:41.396 Doorbell Stride: 4 bytes 00:16:41.396 NVM Subsystem Reset: Not Supported 00:16:41.396 Command Sets Supported 00:16:41.396 NVM Command Set: Supported 00:16:41.396 Boot Partition: Not Supported 00:16:41.396 Memory Page Size Minimum: 4096 bytes 00:16:41.396 Memory Page Size Maximum: 4096 bytes 00:16:41.396 Persistent Memory Region: Not Supported 00:16:41.396 Optional Asynchronous Events Supported 00:16:41.396 Namespace Attribute Notices: Supported 00:16:41.396 Firmware Activation Notices: Not Supported 00:16:41.396 ANA Change Notices: Not Supported 00:16:41.396 PLE Aggregate Log Change Notices: Not Supported 00:16:41.396 LBA Status Info Alert Notices: Not Supported 00:16:41.396 EGE Aggregate Log Change Notices: Not Supported 00:16:41.396 Normal NVM Subsystem Shutdown event: Not Supported 00:16:41.396 Zone Descriptor Change Notices: Not Supported 00:16:41.396 Discovery Log Change Notices: Not Supported 00:16:41.396 Controller Attributes 00:16:41.396 128-bit Host Identifier: Supported 00:16:41.396 Non-Operational Permissive Mode: Not Supported 00:16:41.396 NVM Sets: Not Supported 00:16:41.396 Read Recovery Levels: Not Supported 00:16:41.396 Endurance Groups: Not Supported 00:16:41.396 Predictable Latency Mode: Not Supported 00:16:41.396 Traffic Based Keep ALive: Not Supported 00:16:41.396 Namespace Granularity: Not Supported 00:16:41.396 SQ Associations: Not Supported 00:16:41.396 UUID List: Not Supported 00:16:41.396 Multi-Domain Subsystem: Not Supported 00:16:41.396 Fixed Capacity Management: Not Supported 00:16:41.396 Variable Capacity Management: Not Supported 00:16:41.396 Delete Endurance Group: Not Supported 00:16:41.396 Delete NVM Set: Not Supported 00:16:41.396 Extended LBA Formats Supported: Not Supported 00:16:41.396 Flexible Data Placement Supported: Not Supported 00:16:41.396 00:16:41.396 Controller Memory Buffer Support 00:16:41.396 ================================ 00:16:41.396 Supported: No 00:16:41.396 00:16:41.396 Persistent Memory Region Support 00:16:41.396 ================================ 00:16:41.396 Supported: No 00:16:41.396 00:16:41.396 Admin Command Set Attributes 
00:16:41.396 ============================ 00:16:41.396 Security Send/Receive: Not Supported 00:16:41.396 Format NVM: Not Supported 00:16:41.396 Firmware Activate/Download: Not Supported 00:16:41.396 Namespace Management: Not Supported 00:16:41.396 Device Self-Test: Not Supported 00:16:41.396 Directives: Not Supported 00:16:41.396 NVMe-MI: Not Supported 00:16:41.396 Virtualization Management: Not Supported 00:16:41.396 Doorbell Buffer Config: Not Supported 00:16:41.396 Get LBA Status Capability: Not Supported 00:16:41.396 Command & Feature Lockdown Capability: Not Supported 00:16:41.396 Abort Command Limit: 4 00:16:41.396 Async Event Request Limit: 4 00:16:41.396 Number of Firmware Slots: N/A 00:16:41.396 Firmware Slot 1 Read-Only: N/A 00:16:41.396 Firmware Activation Without Reset: N/A 00:16:41.396 Multiple Update Detection Support: N/A 00:16:41.396 Firmware Update Granularity: No Information Provided 00:16:41.396 Per-Namespace SMART Log: No 00:16:41.396 Asymmetric Namespace Access Log Page: Not Supported 00:16:41.396 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:41.396 Command Effects Log Page: Supported 00:16:41.396 Get Log Page Extended Data: Supported 00:16:41.396 Telemetry Log Pages: Not Supported 00:16:41.396 Persistent Event Log Pages: Not Supported 00:16:41.396 Supported Log Pages Log Page: May Support 00:16:41.396 Commands Supported & Effects Log Page: Not Supported 00:16:41.396 Feature Identifiers & Effects Log Page:May Support 00:16:41.396 NVMe-MI Commands & Effects Log Page: May Support 00:16:41.396 Data Area 4 for Telemetry Log: Not Supported 00:16:41.396 Error Log Page Entries Supported: 128 00:16:41.396 Keep Alive: Supported 00:16:41.396 Keep Alive Granularity: 10000 ms 00:16:41.396 00:16:41.396 NVM Command Set Attributes 00:16:41.396 ========================== 00:16:41.396 Submission Queue Entry Size 00:16:41.396 Max: 64 00:16:41.396 Min: 64 00:16:41.396 Completion Queue Entry Size 00:16:41.396 Max: 16 00:16:41.396 Min: 16 00:16:41.396 Number of Namespaces: 32 00:16:41.396 Compare Command: Supported 00:16:41.396 Write Uncorrectable Command: Not Supported 00:16:41.396 Dataset Management Command: Supported 00:16:41.396 Write Zeroes Command: Supported 00:16:41.396 Set Features Save Field: Not Supported 00:16:41.396 Reservations: Not Supported 00:16:41.396 Timestamp: Not Supported 00:16:41.396 Copy: Supported 00:16:41.396 Volatile Write Cache: Present 00:16:41.396 Atomic Write Unit (Normal): 1 00:16:41.396 Atomic Write Unit (PFail): 1 00:16:41.396 Atomic Compare & Write Unit: 1 00:16:41.396 Fused Compare & Write: Supported 00:16:41.396 Scatter-Gather List 00:16:41.396 SGL Command Set: Supported (Dword aligned) 00:16:41.396 SGL Keyed: Not Supported 00:16:41.396 SGL Bit Bucket Descriptor: Not Supported 00:16:41.396 SGL Metadata Pointer: Not Supported 00:16:41.396 Oversized SGL: Not Supported 00:16:41.396 SGL Metadata Address: Not Supported 00:16:41.396 SGL Offset: Not Supported 00:16:41.396 Transport SGL Data Block: Not Supported 00:16:41.397 Replay Protected Memory Block: Not Supported 00:16:41.397 00:16:41.397 Firmware Slot Information 00:16:41.397 ========================= 00:16:41.397 Active slot: 1 00:16:41.397 Slot 1 Firmware Revision: 25.01 00:16:41.397 00:16:41.397 00:16:41.397 Commands Supported and Effects 00:16:41.397 ============================== 00:16:41.397 Admin Commands 00:16:41.397 -------------- 00:16:41.397 Get Log Page (02h): Supported 00:16:41.397 Identify (06h): Supported 00:16:41.397 Abort (08h): Supported 00:16:41.397 Set Features (09h): Supported 
00:16:41.397 Get Features (0Ah): Supported 00:16:41.397 Asynchronous Event Request (0Ch): Supported 00:16:41.397 Keep Alive (18h): Supported 00:16:41.397 I/O Commands 00:16:41.397 ------------ 00:16:41.397 Flush (00h): Supported LBA-Change 00:16:41.397 Write (01h): Supported LBA-Change 00:16:41.397 Read (02h): Supported 00:16:41.397 Compare (05h): Supported 00:16:41.397 Write Zeroes (08h): Supported LBA-Change 00:16:41.397 Dataset Management (09h): Supported LBA-Change 00:16:41.397 Copy (19h): Supported LBA-Change 00:16:41.397 00:16:41.397 Error Log 00:16:41.397 ========= 00:16:41.397 00:16:41.397 Arbitration 00:16:41.397 =========== 00:16:41.397 Arbitration Burst: 1 00:16:41.397 00:16:41.397 Power Management 00:16:41.397 ================ 00:16:41.397 Number of Power States: 1 00:16:41.397 Current Power State: Power State #0 00:16:41.397 Power State #0: 00:16:41.397 Max Power: 0.00 W 00:16:41.397 Non-Operational State: Operational 00:16:41.397 Entry Latency: Not Reported 00:16:41.397 Exit Latency: Not Reported 00:16:41.397 Relative Read Throughput: 0 00:16:41.397 Relative Read Latency: 0 00:16:41.397 Relative Write Throughput: 0 00:16:41.397 Relative Write Latency: 0 00:16:41.397 Idle Power: Not Reported 00:16:41.397 Active Power: Not Reported 00:16:41.397 Non-Operational Permissive Mode: Not Supported 00:16:41.397 00:16:41.397 Health Information 00:16:41.397 ================== 00:16:41.397 Critical Warnings: 00:16:41.397 Available Spare Space: OK 00:16:41.397 Temperature: OK 00:16:41.397 Device Reliability: OK 00:16:41.397 Read Only: No 00:16:41.397 Volatile Memory Backup: OK 00:16:41.397 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:41.397 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:41.397 Available Spare: 0% 00:16:41.397 Available Sp[2024-11-20 06:27:13.185508] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:41.397 [2024-11-20 06:27:13.193321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:41.397 [2024-11-20 06:27:13.193370] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:41.397 [2024-11-20 06:27:13.193388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.397 [2024-11-20 06:27:13.193399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.397 [2024-11-20 06:27:13.193408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.397 [2024-11-20 06:27:13.193417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:41.397 [2024-11-20 06:27:13.193484] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:41.397 [2024-11-20 06:27:13.193505] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:41.397 [2024-11-20 06:27:13.194488] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:41.397 [2024-11-20 06:27:13.194576] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:41.397 [2024-11-20 06:27:13.194592] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:41.397 [2024-11-20 06:27:13.195492] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:41.397 [2024-11-20 06:27:13.195516] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:41.397 [2024-11-20 06:27:13.195568] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:41.397 [2024-11-20 06:27:13.196770] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:41.654 are Threshold: 0% 00:16:41.655 Life Percentage Used: 0% 00:16:41.655 Data Units Read: 0 00:16:41.655 Data Units Written: 0 00:16:41.655 Host Read Commands: 0 00:16:41.655 Host Write Commands: 0 00:16:41.655 Controller Busy Time: 0 minutes 00:16:41.655 Power Cycles: 0 00:16:41.655 Power On Hours: 0 hours 00:16:41.655 Unsafe Shutdowns: 0 00:16:41.655 Unrecoverable Media Errors: 0 00:16:41.655 Lifetime Error Log Entries: 0 00:16:41.655 Warning Temperature Time: 0 minutes 00:16:41.655 Critical Temperature Time: 0 minutes 00:16:41.655 00:16:41.655 Number of Queues 00:16:41.655 ================ 00:16:41.655 Number of I/O Submission Queues: 127 00:16:41.655 Number of I/O Completion Queues: 127 00:16:41.655 00:16:41.655 Active Namespaces 00:16:41.655 ================= 00:16:41.655 Namespace ID:1 00:16:41.655 Error Recovery Timeout: Unlimited 00:16:41.655 Command Set Identifier: NVM (00h) 00:16:41.655 Deallocate: Supported 00:16:41.655 Deallocated/Unwritten Error: Not Supported 00:16:41.655 Deallocated Read Value: Unknown 00:16:41.655 Deallocate in Write Zeroes: Not Supported 00:16:41.655 Deallocated Guard Field: 0xFFFF 00:16:41.655 Flush: Supported 00:16:41.655 Reservation: Supported 00:16:41.655 Namespace Sharing Capabilities: Multiple Controllers 00:16:41.655 Size (in LBAs): 131072 (0GiB) 00:16:41.655 Capacity (in LBAs): 131072 (0GiB) 00:16:41.655 Utilization (in LBAs): 131072 (0GiB) 00:16:41.655 NGUID: CD485E414A254C94A9E92F65916D74A8 00:16:41.655 UUID: cd485e41-4a25-4c94-a9e9-2f65916d74a8 00:16:41.655 Thin Provisioning: Not Supported 00:16:41.655 Per-NS Atomic Units: Yes 00:16:41.655 Atomic Boundary Size (Normal): 0 00:16:41.655 Atomic Boundary Size (PFail): 0 00:16:41.655 Atomic Boundary Offset: 0 00:16:41.655 Maximum Single Source Range Length: 65535 00:16:41.655 Maximum Copy Length: 65535 00:16:41.655 Maximum Source Range Count: 1 00:16:41.655 NGUID/EUI64 Never Reused: No 00:16:41.655 Namespace Write Protected: No 00:16:41.655 Number of LBA Formats: 1 00:16:41.655 Current LBA Format: LBA Format #00 00:16:41.655 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:41.655 00:16:41.655 06:27:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:41.655 [2024-11-20 06:27:13.450212] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:46.921 Initializing NVMe Controllers 00:16:46.921 
Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:46.921 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:46.921 Initialization complete. Launching workers. 00:16:46.921 ======================================================== 00:16:46.921 Latency(us) 00:16:46.921 Device Information : IOPS MiB/s Average min max 00:16:46.921 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 34913.27 136.38 3665.49 1164.49 7335.40 00:16:46.921 ======================================================== 00:16:46.921 Total : 34913.27 136.38 3665.49 1164.49 7335.40 00:16:46.921 00:16:46.921 [2024-11-20 06:27:18.556672] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:46.921 06:27:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:47.178 [2024-11-20 06:27:18.806398] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:52.440 Initializing NVMe Controllers 00:16:52.440 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:52.440 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:16:52.440 Initialization complete. Launching workers. 00:16:52.440 ======================================================== 00:16:52.440 Latency(us) 00:16:52.440 Device Information : IOPS MiB/s Average min max 00:16:52.440 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 30892.60 120.67 4145.43 1229.36 10165.21 00:16:52.440 ======================================================== 00:16:52.440 Total : 30892.60 120.67 4145.43 1229.36 10165.21 00:16:52.440 00:16:52.440 [2024-11-20 06:27:23.828998] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:52.441 06:27:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:52.441 [2024-11-20 06:27:24.059950] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:57.703 [2024-11-20 06:27:29.199704] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:57.703 Initializing NVMe Controllers 00:16:57.703 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:57.703 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:57.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:16:57.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:16:57.703 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:16:57.703 Initialization complete. Launching workers. 
00:16:57.703 Starting thread on core 2 00:16:57.703 Starting thread on core 3 00:16:57.703 Starting thread on core 1 00:16:57.703 06:27:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:16:57.703 [2024-11-20 06:27:29.527831] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:00.984 [2024-11-20 06:27:32.606235] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:00.984 Initializing NVMe Controllers 00:17:00.984 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:00.984 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:00.984 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:00.984 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:00.984 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:00.984 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:00.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:00.984 Initialization complete. Launching workers. 00:17:00.984 Starting thread on core 1 with urgent priority queue 00:17:00.984 Starting thread on core 2 with urgent priority queue 00:17:00.984 Starting thread on core 3 with urgent priority queue 00:17:00.984 Starting thread on core 0 with urgent priority queue 00:17:00.984 SPDK bdev Controller (SPDK2 ) core 0: 4300.33 IO/s 23.25 secs/100000 ios 00:17:00.984 SPDK bdev Controller (SPDK2 ) core 1: 5190.67 IO/s 19.27 secs/100000 ios 00:17:00.984 SPDK bdev Controller (SPDK2 ) core 2: 5122.00 IO/s 19.52 secs/100000 ios 00:17:00.984 SPDK bdev Controller (SPDK2 ) core 3: 5187.33 IO/s 19.28 secs/100000 ios 00:17:00.985 ======================================================== 00:17:00.985 00:17:00.985 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:01.242 [2024-11-20 06:27:32.929783] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:01.242 Initializing NVMe Controllers 00:17:01.242 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:01.242 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:01.242 Namespace ID: 1 size: 0GB 00:17:01.242 Initialization complete. 00:17:01.242 INFO: using host memory buffer for IO 00:17:01.242 Hello world! 
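A quick consistency check on the two spdk_nvme_perf summaries above (not part of the recorded run): with a fixed queue depth, the reported average latency is approximately queue depth divided by IOPS, so the throughput and latency columns can be cross-checked directly. A one-line sketch using the figures from the read and write runs:

awk 'BEGIN {
  # queue depth was 128 in both runs; IOPS taken from the perf summaries above
  printf "read:  ~%.1f us (reported 3665.49 us)\n", 128 / 34913.27 * 1e6
  printf "write: ~%.1f us (reported 4145.43 us)\n", 128 / 30892.60 * 1e6
}'
# prints roughly 3666 us and 4143 us; the small gaps come from measurement
# windows and rounding, not from any discrepancy in the run itself.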
00:17:01.242 [2024-11-20 06:27:32.938845] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:01.242 06:27:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:01.501 [2024-11-20 06:27:33.245056] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:02.877 Initializing NVMe Controllers 00:17:02.877 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:02.877 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:02.877 Initialization complete. Launching workers. 00:17:02.877 submit (in ns) avg, min, max = 8387.2, 3525.6, 4016943.3 00:17:02.877 complete (in ns) avg, min, max = 27033.5, 2063.3, 5012700.0 00:17:02.877 00:17:02.877 Submit histogram 00:17:02.877 ================ 00:17:02.877 Range in us Cumulative Count 00:17:02.877 3.508 - 3.532: 0.0076% ( 1) 00:17:02.877 3.556 - 3.579: 0.5893% ( 77) 00:17:02.877 3.579 - 3.603: 4.2384% ( 483) 00:17:02.877 3.603 - 3.627: 11.3025% ( 935) 00:17:02.877 3.627 - 3.650: 21.1469% ( 1303) 00:17:02.877 3.650 - 3.674: 32.1925% ( 1462) 00:17:02.877 3.674 - 3.698: 41.1605% ( 1187) 00:17:02.877 3.698 - 3.721: 49.0329% ( 1042) 00:17:02.877 3.721 - 3.745: 54.7522% ( 757) 00:17:02.877 3.745 - 3.769: 60.3732% ( 744) 00:17:02.877 3.769 - 3.793: 64.8534% ( 593) 00:17:02.877 3.793 - 3.816: 68.7821% ( 520) 00:17:02.877 3.816 - 3.840: 71.9326% ( 417) 00:17:02.877 3.840 - 3.864: 75.9142% ( 527) 00:17:02.877 3.864 - 3.887: 79.7824% ( 512) 00:17:02.877 3.887 - 3.911: 83.2578% ( 460) 00:17:02.877 3.911 - 3.935: 86.0759% ( 373) 00:17:02.877 3.935 - 3.959: 87.9798% ( 252) 00:17:02.877 3.959 - 3.982: 89.7779% ( 238) 00:17:02.877 3.982 - 4.006: 91.4853% ( 226) 00:17:02.877 4.006 - 4.030: 92.8679% ( 183) 00:17:02.878 4.030 - 4.053: 94.1599% ( 171) 00:17:02.878 4.053 - 4.077: 95.0060% ( 112) 00:17:02.878 4.077 - 4.101: 95.7162% ( 94) 00:17:02.878 4.101 - 4.124: 96.0864% ( 49) 00:17:02.878 4.124 - 4.148: 96.3660% ( 37) 00:17:02.878 4.148 - 4.172: 96.6153% ( 33) 00:17:02.878 4.172 - 4.196: 96.7362% ( 16) 00:17:02.878 4.196 - 4.219: 96.8495% ( 15) 00:17:02.878 4.219 - 4.243: 96.9855% ( 18) 00:17:02.878 4.243 - 4.267: 97.0913% ( 14) 00:17:02.878 4.267 - 4.290: 97.1819% ( 12) 00:17:02.878 4.290 - 4.314: 97.2424% ( 8) 00:17:02.878 4.314 - 4.338: 97.3255% ( 11) 00:17:02.878 4.338 - 4.361: 97.3859% ( 8) 00:17:02.878 4.361 - 4.385: 97.4161% ( 4) 00:17:02.878 4.385 - 4.409: 97.4464% ( 4) 00:17:02.878 4.409 - 4.433: 97.4539% ( 1) 00:17:02.878 4.433 - 4.456: 97.4841% ( 4) 00:17:02.878 4.456 - 4.480: 97.4917% ( 1) 00:17:02.878 4.480 - 4.504: 97.4992% ( 1) 00:17:02.878 4.504 - 4.527: 97.5068% ( 1) 00:17:02.878 4.527 - 4.551: 97.5144% ( 1) 00:17:02.878 4.575 - 4.599: 97.5219% ( 1) 00:17:02.878 4.599 - 4.622: 97.5370% ( 2) 00:17:02.878 4.622 - 4.646: 97.5446% ( 1) 00:17:02.878 4.670 - 4.693: 97.5672% ( 3) 00:17:02.878 4.693 - 4.717: 97.5748% ( 1) 00:17:02.878 4.717 - 4.741: 97.5824% ( 1) 00:17:02.878 4.741 - 4.764: 97.5975% ( 2) 00:17:02.878 4.764 - 4.788: 97.6277% ( 4) 00:17:02.878 4.788 - 4.812: 97.6352% ( 1) 00:17:02.878 4.812 - 4.836: 97.6806% ( 6) 00:17:02.878 4.836 - 4.859: 97.7259% ( 6) 00:17:02.878 4.859 - 4.883: 97.7863% ( 8) 00:17:02.878 4.883 - 4.907: 97.8468% ( 8) 00:17:02.878 4.907 - 4.930: 97.8770% ( 4) 00:17:02.878 4.930 - 
4.954: 97.9072% ( 4) 00:17:02.878 4.954 - 4.978: 97.9299% ( 3) 00:17:02.878 4.978 - 5.001: 97.9979% ( 9) 00:17:02.878 5.001 - 5.025: 98.0659% ( 9) 00:17:02.878 5.025 - 5.049: 98.1188% ( 7) 00:17:02.878 5.049 - 5.073: 98.1490% ( 4) 00:17:02.878 5.073 - 5.096: 98.1565% ( 1) 00:17:02.878 5.096 - 5.120: 98.1792% ( 3) 00:17:02.878 5.120 - 5.144: 98.2321% ( 7) 00:17:02.878 5.144 - 5.167: 98.2548% ( 3) 00:17:02.878 5.167 - 5.191: 98.2925% ( 5) 00:17:02.878 5.215 - 5.239: 98.3228% ( 4) 00:17:02.878 5.239 - 5.262: 98.3530% ( 4) 00:17:02.878 5.262 - 5.286: 98.3756% ( 3) 00:17:02.878 5.310 - 5.333: 98.3832% ( 1) 00:17:02.878 5.333 - 5.357: 98.4059% ( 3) 00:17:02.878 5.357 - 5.381: 98.4134% ( 1) 00:17:02.878 5.381 - 5.404: 98.4210% ( 1) 00:17:02.878 5.428 - 5.452: 98.4285% ( 1) 00:17:02.878 5.499 - 5.523: 98.4361% ( 1) 00:17:02.878 5.547 - 5.570: 98.4512% ( 2) 00:17:02.878 5.689 - 5.713: 98.4587% ( 1) 00:17:02.878 5.784 - 5.807: 98.4739% ( 2) 00:17:02.878 5.902 - 5.926: 98.4814% ( 1) 00:17:02.878 5.973 - 5.997: 98.4965% ( 2) 00:17:02.878 6.044 - 6.068: 98.5041% ( 1) 00:17:02.878 6.210 - 6.258: 98.5192% ( 2) 00:17:02.878 6.305 - 6.353: 98.5267% ( 1) 00:17:02.878 6.684 - 6.732: 98.5343% ( 1) 00:17:02.878 6.827 - 6.874: 98.5419% ( 1) 00:17:02.878 6.969 - 7.016: 98.5494% ( 1) 00:17:02.878 7.016 - 7.064: 98.5570% ( 1) 00:17:02.878 7.111 - 7.159: 98.5645% ( 1) 00:17:02.878 7.206 - 7.253: 98.5721% ( 1) 00:17:02.878 7.301 - 7.348: 98.5796% ( 1) 00:17:02.878 7.396 - 7.443: 98.5947% ( 2) 00:17:02.878 7.490 - 7.538: 98.6099% ( 2) 00:17:02.878 7.633 - 7.680: 98.6174% ( 1) 00:17:02.878 7.727 - 7.775: 98.6250% ( 1) 00:17:02.878 7.775 - 7.822: 98.6325% ( 1) 00:17:02.878 8.012 - 8.059: 98.6476% ( 2) 00:17:02.878 8.201 - 8.249: 98.6703% ( 3) 00:17:02.878 8.249 - 8.296: 98.6778% ( 1) 00:17:02.878 8.486 - 8.533: 98.6854% ( 1) 00:17:02.878 8.581 - 8.628: 98.6930% ( 1) 00:17:02.878 8.628 - 8.676: 98.7005% ( 1) 00:17:02.878 8.676 - 8.723: 98.7081% ( 1) 00:17:02.878 8.723 - 8.770: 98.7232% ( 2) 00:17:02.878 8.770 - 8.818: 98.7383% ( 2) 00:17:02.878 8.818 - 8.865: 98.7458% ( 1) 00:17:02.878 9.007 - 9.055: 98.7610% ( 2) 00:17:02.878 9.055 - 9.102: 98.7836% ( 3) 00:17:02.878 9.197 - 9.244: 98.7912% ( 1) 00:17:02.878 9.244 - 9.292: 98.7987% ( 1) 00:17:02.878 9.292 - 9.339: 98.8063% ( 1) 00:17:02.878 9.387 - 9.434: 98.8214% ( 2) 00:17:02.878 9.529 - 9.576: 98.8365% ( 2) 00:17:02.878 9.576 - 9.624: 98.8441% ( 1) 00:17:02.878 9.671 - 9.719: 98.8516% ( 1) 00:17:02.878 9.719 - 9.766: 98.8592% ( 1) 00:17:02.878 9.813 - 9.861: 98.8667% ( 1) 00:17:02.878 9.908 - 9.956: 98.8743% ( 1) 00:17:02.878 10.098 - 10.145: 98.8818% ( 1) 00:17:02.878 10.240 - 10.287: 98.8894% ( 1) 00:17:02.878 10.335 - 10.382: 98.8969% ( 1) 00:17:02.878 10.430 - 10.477: 98.9045% ( 1) 00:17:02.878 10.524 - 10.572: 98.9121% ( 1) 00:17:02.878 10.951 - 10.999: 98.9272% ( 2) 00:17:02.878 10.999 - 11.046: 98.9498% ( 3) 00:17:02.878 11.330 - 11.378: 98.9574% ( 1) 00:17:02.878 11.425 - 11.473: 98.9649% ( 1) 00:17:02.878 11.615 - 11.662: 98.9725% ( 1) 00:17:02.878 11.710 - 11.757: 98.9801% ( 1) 00:17:02.878 11.994 - 12.041: 98.9876% ( 1) 00:17:02.878 12.136 - 12.231: 98.9952% ( 1) 00:17:02.878 12.516 - 12.610: 99.0027% ( 1) 00:17:02.878 12.610 - 12.705: 99.0178% ( 2) 00:17:02.878 12.705 - 12.800: 99.0254% ( 1) 00:17:02.878 12.800 - 12.895: 99.0329% ( 1) 00:17:02.878 13.084 - 13.179: 99.0405% ( 1) 00:17:02.878 13.179 - 13.274: 99.0481% ( 1) 00:17:02.878 13.274 - 13.369: 99.0632% ( 2) 00:17:02.878 13.369 - 13.464: 99.0707% ( 1) 00:17:02.878 13.653 - 13.748: 99.0783% ( 1) 
00:17:02.878 14.127 - 14.222: 99.0858% ( 1) 00:17:02.878 14.412 - 14.507: 99.0934% ( 1) 00:17:02.878 14.507 - 14.601: 99.1009% ( 1) 00:17:02.878 14.696 - 14.791: 99.1085% ( 1) 00:17:02.878 14.791 - 14.886: 99.1160% ( 1) 00:17:02.878 15.076 - 15.170: 99.1236% ( 1) 00:17:02.878 16.972 - 17.067: 99.1312% ( 1) 00:17:02.878 17.067 - 17.161: 99.1387% ( 1) 00:17:02.878 17.161 - 17.256: 99.1538% ( 2) 00:17:02.878 17.256 - 17.351: 99.1916% ( 5) 00:17:02.878 17.351 - 17.446: 99.2143% ( 3) 00:17:02.878 17.446 - 17.541: 99.2294% ( 2) 00:17:02.878 17.541 - 17.636: 99.2445% ( 2) 00:17:02.878 17.636 - 17.730: 99.2520% ( 1) 00:17:02.878 17.730 - 17.825: 99.2974% ( 6) 00:17:02.878 17.825 - 17.920: 99.3956% ( 13) 00:17:02.878 17.920 - 18.015: 99.4636% ( 9) 00:17:02.878 18.015 - 18.110: 99.4938% ( 4) 00:17:02.878 18.110 - 18.204: 99.5391% ( 6) 00:17:02.878 18.204 - 18.299: 99.5694% ( 4) 00:17:02.878 18.299 - 18.394: 99.6222% ( 7) 00:17:02.878 18.394 - 18.489: 99.6978% ( 10) 00:17:02.878 18.489 - 18.584: 99.7356% ( 5) 00:17:02.878 18.584 - 18.679: 99.7885% ( 7) 00:17:02.878 18.679 - 18.773: 99.8111% ( 3) 00:17:02.878 18.773 - 18.868: 99.8187% ( 1) 00:17:02.878 18.868 - 18.963: 99.8262% ( 1) 00:17:02.878 18.963 - 19.058: 99.8413% ( 2) 00:17:02.878 19.342 - 19.437: 99.8489% ( 1) 00:17:02.878 19.437 - 19.532: 99.8565% ( 1) 00:17:02.878 19.911 - 20.006: 99.8640% ( 1) 00:17:02.878 23.704 - 23.799: 99.8716% ( 1) 00:17:02.878 23.893 - 23.988: 99.8791% ( 1) 00:17:02.878 25.221 - 25.410: 99.8867% ( 1) 00:17:02.878 3034.074 - 3046.210: 99.8942% ( 1) 00:17:02.878 3980.705 - 4004.978: 99.9622% ( 9) 00:17:02.878 4004.978 - 4029.250: 100.0000% ( 5) 00:17:02.878 00:17:02.878 Complete histogram 00:17:02.878 ================== 00:17:02.878 Range in us Cumulative Count 00:17:02.878 2.062 - 2.074: 9.6630% ( 1279) 00:17:02.878 2.074 - 2.086: 46.7664% ( 4911) 00:17:02.878 2.086 - 2.098: 49.5316% ( 366) 00:17:02.878 2.098 - 2.110: 55.6135% ( 805) 00:17:02.878 2.110 - 2.121: 62.1185% ( 861) 00:17:02.878 2.121 - 2.133: 63.2744% ( 153) 00:17:02.878 2.133 - 2.145: 73.1037% ( 1301) 00:17:02.878 2.145 - 2.157: 82.8271% ( 1287) 00:17:02.878 2.157 - 2.169: 83.5902% ( 101) 00:17:02.878 2.169 - 2.181: 86.5216% ( 388) 00:17:02.878 2.181 - 2.193: 88.6899% ( 287) 00:17:02.878 2.193 - 2.204: 89.1508% ( 61) 00:17:02.878 2.204 - 2.216: 90.5183% ( 181) 00:17:02.878 2.216 - 2.228: 92.2786% ( 233) 00:17:02.878 2.228 - 2.240: 94.0994% ( 241) 00:17:02.878 2.240 - 2.252: 94.8549% ( 100) 00:17:02.878 2.252 - 2.264: 95.0665% ( 28) 00:17:02.878 2.264 - 2.276: 95.1723% ( 14) 00:17:02.878 2.276 - 2.287: 95.2931% ( 16) 00:17:02.878 2.287 - 2.299: 95.4442% ( 20) 00:17:02.878 2.299 - 2.311: 95.8220% ( 50) 00:17:02.878 2.311 - 2.323: 95.9807% ( 21) 00:17:02.878 2.323 - 2.335: 96.0260% ( 6) 00:17:02.878 2.335 - 2.347: 96.0562% ( 4) 00:17:02.878 2.347 - 2.359: 96.1544% ( 13) 00:17:02.879 2.359 - 2.370: 96.3433% ( 25) 00:17:02.879 2.370 - 2.382: 96.6153% ( 36) 00:17:02.879 2.382 - 2.394: 96.9477% ( 44) 00:17:02.879 2.394 - 2.406: 97.2121% ( 35) 00:17:02.879 2.406 - 2.418: 97.3859% ( 23) 00:17:02.879 2.418 - 2.430: 97.6806% ( 39) 00:17:02.879 2.430 - 2.441: 97.8166% ( 18) 00:17:02.879 2.441 - 2.453: 97.9148% ( 13) 00:17:02.879 2.453 - 2.465: 98.0357% ( 16) 00:17:02.879 2.465 - 2.477: 98.1037% ( 9) 00:17:02.879 2.477 - 2.489: 98.1792% ( 10) 00:17:02.879 2.489 - 2.501: 98.2094% ( 4) 00:17:02.879 2.501 - 2.513: 98.2623% ( 7) 00:17:02.879 2.513 - 2.524: 98.2925% ( 4) 00:17:02.879 2.524 - 2.536: 98.3152% ( 3) 00:17:02.879 2.536 - 2.548: 98.3303% ( 2) 00:17:02.879 2.548 
- 2.560: 98.3681% ( 5) 00:17:02.879 2.560 - 2.572: 98.3908% ( 3) 00:17:02.879 2.572 - 2.584: 98.4134% ( 3) 00:17:02.879 2.619 - 2.631: 98.4210% ( 1) 00:17:02.879 2.631 - 2.643: 98.4285% ( 1) 00:17:02.879 2.714 - 2.726: 98.4361% ( 1) 00:17:02.879 2.726 - 2.738: 98.4436% ( 1) 00:17:02.879 2.738 - 2.750: 98.4587% ( 2) 00:17:02.879 2.750 - 2.761: 98.4739% ( 2) 00:17:02.879 2.761 - 2.773: 98.4890% ( 2) 00:17:02.879 2.797 - 2.809: 98.5041% ( 2) 00:17:02.879 2.987 - 2.999: 98.5192% ( 2) 00:17:02.879 3.058 - 3.081: 98.5267% ( 1) 00:17:02.879 3.556 - 3.579: 98.5343% ( 1) 00:17:02.879 3.603 - 3.627: 98.5419% ( 1) 00:17:02.879 3.627 - 3.650: 98.5494% ( 1) 00:17:02.879 3.650 - 3.674: 98.5570% ( 1) 00:17:02.879 3.674 - 3.698: 98.5645% ( 1) 00:17:02.879 3.698 - 3.721: 98.5721% ( 1) 00:17:02.879 3.745 - 3.769: 98.5796% ( 1) 00:17:02.879 3.793 - 3.816: 98.5947% ( 2) 00:17:02.879 3.935 - 3.959: 98.6099% ( 2) 00:17:02.879 3.959 - 3.982: 98.6174% ( 1) 00:17:02.879 3.982 - 4.006: 98.6250% ( 1) 00:17:02.879 4.030 - 4.053: 98.6325% ( 1) 00:17:02.879 4.077 - 4.101: 98.6476% ( 2) 00:17:02.879 4.148 - 4.172: 98.6552% ( 1) 00:17:02.879 4.172 - 4.196: 98.6627% ( 1) 00:17:02.879 4.243 - 4.267: 98.6703% ( 1) 00:17:02.879 4.267 - 4.290: 98.6778% ( 1) 00:17:02.879 4.314 - 4.338: 98.6854% ( 1) 00:17:02.879 5.120 - 5.144: 98.6930% ( 1) 00:17:02.879 5.167 - 5.191: 9[2024-11-20 06:27:34.349088] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:02.879 8.7005% ( 1) 00:17:02.879 5.665 - 5.689: 98.7081% ( 1) 00:17:02.879 5.689 - 5.713: 98.7156% ( 1) 00:17:02.879 5.736 - 5.760: 98.7232% ( 1) 00:17:02.879 5.926 - 5.950: 98.7307% ( 1) 00:17:02.879 6.044 - 6.068: 98.7383% ( 1) 00:17:02.879 6.447 - 6.495: 98.7458% ( 1) 00:17:02.879 6.495 - 6.542: 98.7534% ( 1) 00:17:02.879 6.542 - 6.590: 98.7610% ( 1) 00:17:02.879 6.732 - 6.779: 98.7685% ( 1) 00:17:02.879 7.016 - 7.064: 98.7761% ( 1) 00:17:02.879 7.111 - 7.159: 98.7836% ( 1) 00:17:02.879 7.206 - 7.253: 98.7912% ( 1) 00:17:02.879 7.727 - 7.775: 98.7987% ( 1) 00:17:02.879 7.775 - 7.822: 98.8063% ( 1) 00:17:02.879 7.822 - 7.870: 98.8138% ( 1) 00:17:02.879 7.917 - 7.964: 98.8214% ( 1) 00:17:02.879 8.012 - 8.059: 98.8290% ( 1) 00:17:02.879 8.154 - 8.201: 98.8365% ( 1) 00:17:02.879 8.486 - 8.533: 98.8441% ( 1) 00:17:02.879 9.055 - 9.102: 98.8516% ( 1) 00:17:02.879 9.339 - 9.387: 98.8592% ( 1) 00:17:02.879 9.481 - 9.529: 98.8667% ( 1) 00:17:02.879 9.813 - 9.861: 98.8743% ( 1) 00:17:02.879 10.287 - 10.335: 98.8818% ( 1) 00:17:02.879 13.653 - 13.748: 98.8894% ( 1) 00:17:02.879 15.455 - 15.550: 98.8969% ( 1) 00:17:02.879 15.550 - 15.644: 98.9045% ( 1) 00:17:02.879 15.644 - 15.739: 98.9121% ( 1) 00:17:02.879 15.739 - 15.834: 98.9423% ( 4) 00:17:02.879 15.834 - 15.929: 98.9574% ( 2) 00:17:02.879 15.929 - 16.024: 98.9801% ( 3) 00:17:02.879 16.024 - 16.119: 98.9952% ( 2) 00:17:02.879 16.119 - 16.213: 99.0254% ( 4) 00:17:02.879 16.213 - 16.308: 99.0405% ( 2) 00:17:02.879 16.308 - 16.403: 99.0858% ( 6) 00:17:02.879 16.403 - 16.498: 99.0934% ( 1) 00:17:02.879 16.498 - 16.593: 99.1085% ( 2) 00:17:02.879 16.593 - 16.687: 99.1160% ( 1) 00:17:02.879 16.687 - 16.782: 99.1689% ( 7) 00:17:02.879 16.782 - 16.877: 99.1916% ( 3) 00:17:02.879 16.877 - 16.972: 99.2143% ( 3) 00:17:02.879 16.972 - 17.067: 99.2445% ( 4) 00:17:02.879 17.067 - 17.161: 99.2520% ( 1) 00:17:02.879 17.161 - 17.256: 99.2672% ( 2) 00:17:02.879 17.256 - 17.351: 99.2823% ( 2) 00:17:02.879 17.351 - 17.446: 99.2898% ( 1) 00:17:02.879 17.446 - 17.541: 99.3049% ( 2) 00:17:02.879 17.825 
- 17.920: 99.3125% ( 1) 00:17:02.879 17.920 - 18.015: 99.3276% ( 2) 00:17:02.879 18.110 - 18.204: 99.3351% ( 1) 00:17:02.879 18.204 - 18.299: 99.3427% ( 1) 00:17:02.879 18.394 - 18.489: 99.3503% ( 1) 00:17:02.879 18.773 - 18.868: 99.3654% ( 2) 00:17:02.879 30.341 - 30.530: 99.3729% ( 1) 00:17:02.879 43.425 - 43.615: 99.3805% ( 1) 00:17:02.879 3373.890 - 3398.163: 99.3880% ( 1) 00:17:02.879 3980.705 - 4004.978: 99.8489% ( 61) 00:17:02.879 4004.978 - 4029.250: 99.9924% ( 19) 00:17:02.879 5000.154 - 5024.427: 100.0000% ( 1) 00:17:02.879 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:02.879 [ 00:17:02.879 { 00:17:02.879 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:02.879 "subtype": "Discovery", 00:17:02.879 "listen_addresses": [], 00:17:02.879 "allow_any_host": true, 00:17:02.879 "hosts": [] 00:17:02.879 }, 00:17:02.879 { 00:17:02.879 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:02.879 "subtype": "NVMe", 00:17:02.879 "listen_addresses": [ 00:17:02.879 { 00:17:02.879 "trtype": "VFIOUSER", 00:17:02.879 "adrfam": "IPv4", 00:17:02.879 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:02.879 "trsvcid": "0" 00:17:02.879 } 00:17:02.879 ], 00:17:02.879 "allow_any_host": true, 00:17:02.879 "hosts": [], 00:17:02.879 "serial_number": "SPDK1", 00:17:02.879 "model_number": "SPDK bdev Controller", 00:17:02.879 "max_namespaces": 32, 00:17:02.879 "min_cntlid": 1, 00:17:02.879 "max_cntlid": 65519, 00:17:02.879 "namespaces": [ 00:17:02.879 { 00:17:02.879 "nsid": 1, 00:17:02.879 "bdev_name": "Malloc1", 00:17:02.879 "name": "Malloc1", 00:17:02.879 "nguid": "B6775D0464114C1F9599B35DA7BB9BA6", 00:17:02.879 "uuid": "b6775d04-6411-4c1f-9599-b35da7bb9ba6" 00:17:02.879 }, 00:17:02.879 { 00:17:02.879 "nsid": 2, 00:17:02.879 "bdev_name": "Malloc3", 00:17:02.879 "name": "Malloc3", 00:17:02.879 "nguid": "F34B5B8815CD47CBB8D3FD83B091C29A", 00:17:02.879 "uuid": "f34b5b88-15cd-47cb-b8d3-fd83b091c29a" 00:17:02.879 } 00:17:02.879 ] 00:17:02.879 }, 00:17:02.879 { 00:17:02.879 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:02.879 "subtype": "NVMe", 00:17:02.879 "listen_addresses": [ 00:17:02.879 { 00:17:02.879 "trtype": "VFIOUSER", 00:17:02.879 "adrfam": "IPv4", 00:17:02.879 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:02.879 "trsvcid": "0" 00:17:02.879 } 00:17:02.879 ], 00:17:02.879 "allow_any_host": true, 00:17:02.879 "hosts": [], 00:17:02.879 "serial_number": "SPDK2", 00:17:02.879 "model_number": "SPDK bdev Controller", 00:17:02.879 "max_namespaces": 32, 00:17:02.879 "min_cntlid": 1, 00:17:02.879 "max_cntlid": 65519, 00:17:02.879 "namespaces": [ 00:17:02.879 { 00:17:02.879 "nsid": 1, 00:17:02.879 "bdev_name": "Malloc2", 00:17:02.879 "name": "Malloc2", 00:17:02.879 "nguid": "CD485E414A254C94A9E92F65916D74A8", 00:17:02.879 "uuid": "cd485e41-4a25-4c94-a9e9-2f65916d74a8" 
00:17:02.879 } 00:17:02.879 ] 00:17:02.879 } 00:17:02.879 ] 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2072215 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:02.879 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:02.880 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1267 -- # local i=0 00:17:02.880 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:02.880 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:02.880 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1278 -- # return 0 00:17:02.880 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:02.880 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:03.183 [2024-11-20 06:27:34.834694] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:03.183 Malloc4 00:17:03.183 06:27:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:03.470 [2024-11-20 06:27:35.238715] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:03.470 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:03.470 Asynchronous Event Request test 00:17:03.470 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.470 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:03.470 Registering asynchronous event callbacks... 00:17:03.470 Starting namespace attribute notice tests for all controllers... 00:17:03.470 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:03.470 aer_cb - Changed Namespace 00:17:03.470 Cleaning up... 
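For context, the "aer_cb - Changed Namespace" notice above is triggered entirely from the target side: the test creates a second malloc bdev and attaches it to cnode2 over RPC, and that attach is what raises the namespace-attribute AEN the aer tool is waiting for. Condensed from the trace above (same paths and arguments; a sketch, not a replay of the exact trace):

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2
# the subsystem listing that follows reflects the change: Malloc4 now appears under cnode2 as nsid 2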
00:17:03.728 [ 00:17:03.728 { 00:17:03.728 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:03.728 "subtype": "Discovery", 00:17:03.728 "listen_addresses": [], 00:17:03.728 "allow_any_host": true, 00:17:03.728 "hosts": [] 00:17:03.728 }, 00:17:03.728 { 00:17:03.728 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:03.728 "subtype": "NVMe", 00:17:03.728 "listen_addresses": [ 00:17:03.728 { 00:17:03.728 "trtype": "VFIOUSER", 00:17:03.728 "adrfam": "IPv4", 00:17:03.728 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:03.728 "trsvcid": "0" 00:17:03.728 } 00:17:03.728 ], 00:17:03.728 "allow_any_host": true, 00:17:03.728 "hosts": [], 00:17:03.728 "serial_number": "SPDK1", 00:17:03.728 "model_number": "SPDK bdev Controller", 00:17:03.728 "max_namespaces": 32, 00:17:03.728 "min_cntlid": 1, 00:17:03.728 "max_cntlid": 65519, 00:17:03.728 "namespaces": [ 00:17:03.728 { 00:17:03.728 "nsid": 1, 00:17:03.728 "bdev_name": "Malloc1", 00:17:03.728 "name": "Malloc1", 00:17:03.728 "nguid": "B6775D0464114C1F9599B35DA7BB9BA6", 00:17:03.728 "uuid": "b6775d04-6411-4c1f-9599-b35da7bb9ba6" 00:17:03.728 }, 00:17:03.728 { 00:17:03.728 "nsid": 2, 00:17:03.728 "bdev_name": "Malloc3", 00:17:03.728 "name": "Malloc3", 00:17:03.728 "nguid": "F34B5B8815CD47CBB8D3FD83B091C29A", 00:17:03.728 "uuid": "f34b5b88-15cd-47cb-b8d3-fd83b091c29a" 00:17:03.728 } 00:17:03.728 ] 00:17:03.728 }, 00:17:03.728 { 00:17:03.728 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:03.728 "subtype": "NVMe", 00:17:03.728 "listen_addresses": [ 00:17:03.728 { 00:17:03.728 "trtype": "VFIOUSER", 00:17:03.728 "adrfam": "IPv4", 00:17:03.728 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:03.728 "trsvcid": "0" 00:17:03.728 } 00:17:03.728 ], 00:17:03.728 "allow_any_host": true, 00:17:03.728 "hosts": [], 00:17:03.728 "serial_number": "SPDK2", 00:17:03.728 "model_number": "SPDK bdev Controller", 00:17:03.728 "max_namespaces": 32, 00:17:03.728 "min_cntlid": 1, 00:17:03.728 "max_cntlid": 65519, 00:17:03.728 "namespaces": [ 00:17:03.728 { 00:17:03.728 "nsid": 1, 00:17:03.728 "bdev_name": "Malloc2", 00:17:03.728 "name": "Malloc2", 00:17:03.728 "nguid": "CD485E414A254C94A9E92F65916D74A8", 00:17:03.728 "uuid": "cd485e41-4a25-4c94-a9e9-2f65916d74a8" 00:17:03.728 }, 00:17:03.728 { 00:17:03.728 "nsid": 2, 00:17:03.728 "bdev_name": "Malloc4", 00:17:03.728 "name": "Malloc4", 00:17:03.728 "nguid": "49828F59A4A643AFB3C130562807E8A4", 00:17:03.728 "uuid": "49828f59-a4a6-43af-b3c1-30562807e8a4" 00:17:03.728 } 00:17:03.728 ] 00:17:03.728 } 00:17:03.728 ] 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2072215 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2065972 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # '[' -z 2065972 ']' 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2065972 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2065972 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:03.728 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:03.987 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2065972' 00:17:03.987 killing process with pid 2065972 00:17:03.987 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2065972 00:17:03.987 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2065972 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2072466 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2072466' 00:17:04.245 Process pid: 2072466 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2072466 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # '[' -z 2072466 ']' 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:04.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:04.245 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:04.246 06:27:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:04.246 [2024-11-20 06:27:35.925727] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:04.246 [2024-11-20 06:27:35.926729] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:17:04.246 [2024-11-20 06:27:35.926793] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:04.246 [2024-11-20 06:27:35.992086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:04.246 [2024-11-20 06:27:36.051396] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:04.246 [2024-11-20 06:27:36.051451] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:04.246 [2024-11-20 06:27:36.051480] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:04.246 [2024-11-20 06:27:36.051492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:04.246 [2024-11-20 06:27:36.051502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:04.246 [2024-11-20 06:27:36.054324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:04.246 [2024-11-20 06:27:36.054357] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:04.246 [2024-11-20 06:27:36.054415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:04.246 [2024-11-20 06:27:36.054418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:04.505 [2024-11-20 06:27:36.152030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:04.505 [2024-11-20 06:27:36.152229] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:04.505 [2024-11-20 06:27:36.152522] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:17:04.505 [2024-11-20 06:27:36.153078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:04.505 [2024-11-20 06:27:36.153337] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
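Stripped of the shell tracing, the interrupt-mode bring-up recorded above and the per-device setup traced below reduce to a short command sequence. This sketch reuses the same binaries, paths, sizes and NQNs as this run (the cnode2/vfio-user2 device repeats the same steps); it is a condensed illustration, not a literal replay of the trace:

/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &
# the test waits for the RPC socket (waitforlisten) before issuing any rpc.py calls
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for readability only
$RPC nvmf_create_transport -t VFIOUSER -M -I    # -M -I exactly as passed by the interrupt-mode variant of the test
mkdir -p /var/run/vfio-user/domain/vfio-user1/1
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0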
00:17:04.505 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:04.505 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@866 -- # return 0 00:17:04.505 06:27:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:05.441 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:05.702 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:05.702 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:05.702 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:05.702 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:05.702 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:05.964 Malloc1 00:17:05.964 06:27:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:06.222 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:06.788 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:17:06.788 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:06.788 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:06.788 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:07.046 Malloc2 00:17:07.311 06:27:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:07.576 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:07.835 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2072466 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@952 -- # '[' -z 2072466 ']' 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # kill -0 2072466 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # uname 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2072466 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2072466' 00:17:08.092 killing process with pid 2072466 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@971 -- # kill 2072466 00:17:08.092 06:27:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@976 -- # wait 2072466 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:08.350 00:17:08.350 real 0m53.749s 00:17:08.350 user 3m28.011s 00:17:08.350 sys 0m3.978s 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:08.350 ************************************ 00:17:08.350 END TEST nvmf_vfio_user 00:17:08.350 ************************************ 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:08.350 ************************************ 00:17:08.350 START TEST nvmf_vfio_user_nvme_compliance 00:17:08.350 ************************************ 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:08.350 * Looking for test storage... 
00:17:08.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lcov --version 00:17:08.350 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:08.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.610 --rc genhtml_branch_coverage=1 00:17:08.610 --rc genhtml_function_coverage=1 00:17:08.610 --rc genhtml_legend=1 00:17:08.610 --rc geninfo_all_blocks=1 00:17:08.610 --rc geninfo_unexecuted_blocks=1 00:17:08.610 00:17:08.610 ' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:08.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.610 --rc genhtml_branch_coverage=1 00:17:08.610 --rc genhtml_function_coverage=1 00:17:08.610 --rc genhtml_legend=1 00:17:08.610 --rc geninfo_all_blocks=1 00:17:08.610 --rc geninfo_unexecuted_blocks=1 00:17:08.610 00:17:08.610 ' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:08.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.610 --rc genhtml_branch_coverage=1 00:17:08.610 --rc genhtml_function_coverage=1 00:17:08.610 --rc genhtml_legend=1 00:17:08.610 --rc geninfo_all_blocks=1 00:17:08.610 --rc geninfo_unexecuted_blocks=1 00:17:08.610 00:17:08.610 ' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:08.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:08.610 --rc genhtml_branch_coverage=1 00:17:08.610 --rc genhtml_function_coverage=1 00:17:08.610 --rc genhtml_legend=1 00:17:08.610 --rc geninfo_all_blocks=1 00:17:08.610 --rc 
geninfo_unexecuted_blocks=1 00:17:08.610 00:17:08.610 ' 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:08.610 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:08.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2072957 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2072957' 00:17:08.611 Process pid: 2072957 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2072957 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # '[' -z 2072957 ']' 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:08.611 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:08.611 [2024-11-20 06:27:40.308344] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:17:08.611 [2024-11-20 06:27:40.308437] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:08.611 [2024-11-20 06:27:40.376847] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:08.611 [2024-11-20 06:27:40.434985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:08.611 [2024-11-20 06:27:40.435038] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:08.611 [2024-11-20 06:27:40.435066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:08.611 [2024-11-20 06:27:40.435077] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:08.611 [2024-11-20 06:27:40.435086] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:08.611 [2024-11-20 06:27:40.436535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:08.611 [2024-11-20 06:27:40.436605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:08.611 [2024-11-20 06:27:40.436609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.870 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:08.870 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@866 -- # return 0 00:17:08.870 06:27:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 malloc0 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:09.803 06:27:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:09.803 06:27:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:10.061 00:17:10.061 00:17:10.061 CUnit - A unit testing framework for C - Version 2.1-3 00:17:10.061 http://cunit.sourceforge.net/ 00:17:10.061 00:17:10.061 00:17:10.061 Suite: nvme_compliance 00:17:10.061 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 06:27:41.794798] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.061 [2024-11-20 06:27:41.796218] vfio_user.c: 800:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:10.061 [2024-11-20 06:27:41.796242] vfio_user.c:5503:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:10.061 [2024-11-20 06:27:41.796268] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:10.061 [2024-11-20 06:27:41.797821] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.061 passed 00:17:10.061 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 06:27:41.882397] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.061 [2024-11-20 06:27:41.885421] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.319 passed 00:17:10.319 Test: admin_identify_ns ...[2024-11-20 06:27:41.973876] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.319 [2024-11-20 06:27:42.033322] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:10.319 [2024-11-20 06:27:42.040322] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:10.319 [2024-11-20 06:27:42.062448] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: 
disabling controller 00:17:10.319 passed 00:17:10.319 Test: admin_get_features_mandatory_features ...[2024-11-20 06:27:42.147186] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.319 [2024-11-20 06:27:42.150209] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.577 passed 00:17:10.577 Test: admin_get_features_optional_features ...[2024-11-20 06:27:42.234813] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.577 [2024-11-20 06:27:42.237832] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.577 passed 00:17:10.577 Test: admin_set_features_number_of_queues ...[2024-11-20 06:27:42.320824] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.834 [2024-11-20 06:27:42.425417] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.834 passed 00:17:10.834 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 06:27:42.510145] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.834 [2024-11-20 06:27:42.513169] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:10.834 passed 00:17:10.834 Test: admin_get_log_page_with_lpo ...[2024-11-20 06:27:42.594443] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:10.834 [2024-11-20 06:27:42.663321] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:11.092 [2024-11-20 06:27:42.676421] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.092 passed 00:17:11.092 Test: fabric_property_get ...[2024-11-20 06:27:42.759987] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.092 [2024-11-20 06:27:42.761261] vfio_user.c:5596:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:11.092 [2024-11-20 06:27:42.763007] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.092 passed 00:17:11.092 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 06:27:42.844550] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.092 [2024-11-20 06:27:42.845859] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:11.092 [2024-11-20 06:27:42.847566] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.092 passed 00:17:11.350 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 06:27:42.931878] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.350 [2024-11-20 06:27:43.016319] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:11.350 [2024-11-20 06:27:43.032315] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:11.350 [2024-11-20 06:27:43.037426] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.350 passed 00:17:11.350 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 06:27:43.121208] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.350 [2024-11-20 06:27:43.122531] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:11.350 [2024-11-20 06:27:43.124227] vfio_user.c:2794:disable_ctrlr: 
*NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.350 passed 00:17:11.608 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 06:27:43.208965] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.608 [2024-11-20 06:27:43.284310] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:11.608 [2024-11-20 06:27:43.308315] vfio_user.c:2305:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:11.608 [2024-11-20 06:27:43.313424] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.608 passed 00:17:11.608 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 06:27:43.397077] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.608 [2024-11-20 06:27:43.398385] vfio_user.c:2154:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:11.608 [2024-11-20 06:27:43.398442] vfio_user.c:2148:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:11.608 [2024-11-20 06:27:43.400092] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.608 passed 00:17:11.865 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 06:27:43.480380] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:11.866 [2024-11-20 06:27:43.574311] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:11.866 [2024-11-20 06:27:43.582316] vfio_user.c:2236:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:11.866 [2024-11-20 06:27:43.590316] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:11.866 [2024-11-20 06:27:43.598327] vfio_user.c:2034:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:11.866 [2024-11-20 06:27:43.627433] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:11.866 passed 00:17:12.123 Test: admin_create_io_sq_verify_pc ...[2024-11-20 06:27:43.711115] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:12.123 [2024-11-20 06:27:43.727329] vfio_user.c:2047:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:12.123 [2024-11-20 06:27:43.744614] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:12.123 passed 00:17:12.123 Test: admin_create_io_qp_max_qps ...[2024-11-20 06:27:43.827143] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.496 [2024-11-20 06:27:44.920338] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:13.496 [2024-11-20 06:27:45.308516] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:13.754 passed 00:17:13.754 Test: admin_create_io_sq_shared_cq ...[2024-11-20 06:27:45.393900] vfio_user.c:2832:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:13.754 [2024-11-20 06:27:45.525344] vfio_user.c:2315:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:13.754 [2024-11-20 06:27:45.562415] vfio_user.c:2794:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:14.012 passed 00:17:14.012 00:17:14.012 Run Summary: Type Total Ran Passed Failed Inactive 00:17:14.012 suites 1 1 n/a 0 0 00:17:14.012 tests 18 18 18 0 0 00:17:14.012 asserts 
360 360 360 0 n/a 00:17:14.012 00:17:14.012 Elapsed time = 1.563 seconds 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2072957 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # '[' -z 2072957 ']' 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # kill -0 2072957 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # uname 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2072957 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2072957' 00:17:14.012 killing process with pid 2072957 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@971 -- # kill 2072957 00:17:14.012 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@976 -- # wait 2072957 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:14.272 00:17:14.272 real 0m5.771s 00:17:14.272 user 0m16.258s 00:17:14.272 sys 0m0.530s 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:14.272 ************************************ 00:17:14.272 END TEST nvmf_vfio_user_nvme_compliance 00:17:14.272 ************************************ 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:14.272 ************************************ 00:17:14.272 START TEST nvmf_vfio_user_fuzz 00:17:14.272 ************************************ 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:14.272 * Looking for test storage... 
00:17:14.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lcov --version 00:17:14.272 06:27:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:14.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.272 --rc genhtml_branch_coverage=1 00:17:14.272 --rc genhtml_function_coverage=1 00:17:14.272 --rc genhtml_legend=1 00:17:14.272 --rc geninfo_all_blocks=1 00:17:14.272 --rc geninfo_unexecuted_blocks=1 00:17:14.272 00:17:14.272 ' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:14.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.272 --rc genhtml_branch_coverage=1 00:17:14.272 --rc genhtml_function_coverage=1 00:17:14.272 --rc genhtml_legend=1 00:17:14.272 --rc geninfo_all_blocks=1 00:17:14.272 --rc geninfo_unexecuted_blocks=1 00:17:14.272 00:17:14.272 ' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:14.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.272 --rc genhtml_branch_coverage=1 00:17:14.272 --rc genhtml_function_coverage=1 00:17:14.272 --rc genhtml_legend=1 00:17:14.272 --rc geninfo_all_blocks=1 00:17:14.272 --rc geninfo_unexecuted_blocks=1 00:17:14.272 00:17:14.272 ' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:14.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:14.272 --rc genhtml_branch_coverage=1 00:17:14.272 --rc genhtml_function_coverage=1 00:17:14.272 --rc genhtml_legend=1 00:17:14.272 --rc geninfo_all_blocks=1 00:17:14.272 --rc geninfo_unexecuted_blocks=1 00:17:14.272 00:17:14.272 ' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:14.272 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:17:14.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2073781 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2073781' 00:17:14.273 Process pid: 2073781 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2073781 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # '[' -z 2073781 ']' 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:14.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:14.273 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:14.531 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:14.531 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@866 -- # return 0 00:17:14.531 06:27:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.906 malloc0 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:15.906 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:15.907 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
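The trace above prepares the vfio-user fuzz target with the same RPC sequence the compliance run used: start nvmf_tgt, create a VFIOUSER transport, back it with a 64 MB malloc bdev using 512-byte blocks, create nqn.2021-09.io.spdk:cnode0, attach the namespace, and add a listener at /var/run/vfio-user. A minimal sketch of that sequence as it could be issued by hand is shown below; the method names and arguments are taken directly from the traced rpc_cmd calls, while the relative rpc.py/nvmf_tgt paths and the use of the default RPC socket are assumptions for illustration.

    # launch the target as traced (-i 0 -e 0xFFFF -m 0x1) and remember its pid
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
    nvmfpid=$!

    # configure the vfio-user subsystem with the same RPCs the test script issues
    ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
    ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0

Once the listener exists, the fuzzer is simply pointed at that socket, which is what the nvme_fuzz invocation in the next trace entry does with -t 30 (run time in seconds) and -S 123456 (random seed).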
00:17:15.907 06:27:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:17:48.006 Fuzzing completed. Shutting down the fuzz application 00:17:48.006 00:17:48.006 Dumping successful admin opcodes: 00:17:48.006 8, 9, 10, 24, 00:17:48.006 Dumping successful io opcodes: 00:17:48.006 0, 00:17:48.006 NS: 0x20000081ef00 I/O qp, Total commands completed: 719154, total successful commands: 2798, random_seed: 1384782528 00:17:48.006 NS: 0x20000081ef00 admin qp, Total commands completed: 161577, total successful commands: 1304, random_seed: 12010112 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2073781 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # '[' -z 2073781 ']' 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # kill -0 2073781 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # uname 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2073781 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2073781' 00:17:48.006 killing process with pid 2073781 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@971 -- # kill 2073781 00:17:48.006 06:28:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@976 -- # wait 2073781 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:17:48.006 00:17:48.006 real 0m32.211s 00:17:48.006 user 0m34.142s 00:17:48.006 sys 0m27.208s 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1128 -- # xtrace_disable 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:48.006 
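The shutdown path traced above follows the usual autotest pattern: the script registers a trap on SIGINT/SIGTERM/EXIT that calls killprocess on the saved $nvmfpid, and killprocess confirms the pid is alive, inspects the process name, then kills and waits for it. A simplified sketch of that cleanup, reconstructed from the traced checks rather than copied verbatim from autotest_common.sh, is:

    # register cleanup for the target started earlier (as the test does at setup time)
    trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT

    killprocess() {
        local pid=$1
        [ -n "$pid" ] || return 1
        kill -0 "$pid" || return 0                  # nothing to do if it already exited
        if [ "$(uname)" = Linux ]; then
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")   # reactor_0 for a running nvmf_tgt
        fi
        [ "$process_name" = sudo ] && return 1      # approximation: do not signal a bare sudo wrapper
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"                                 # reap it so the test exits cleanly
    }

In the log this is visible as the kill -0 / ps / kill / wait sequence against pid 2073781, followed by the fuzz test's elapsed real/user/sys times.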
************************************ 00:17:48.006 END TEST nvmf_vfio_user_fuzz 00:17:48.006 ************************************ 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:48.006 ************************************ 00:17:48.006 START TEST nvmf_auth_target 00:17:48.006 ************************************ 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:17:48.006 * Looking for test storage... 00:17:48.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lcov --version 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:17:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.006 --rc genhtml_branch_coverage=1 00:17:48.006 --rc genhtml_function_coverage=1 00:17:48.006 --rc genhtml_legend=1 00:17:48.006 --rc geninfo_all_blocks=1 00:17:48.006 --rc geninfo_unexecuted_blocks=1 00:17:48.006 00:17:48.006 ' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:17:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.006 --rc genhtml_branch_coverage=1 00:17:48.006 --rc genhtml_function_coverage=1 00:17:48.006 --rc genhtml_legend=1 00:17:48.006 --rc geninfo_all_blocks=1 00:17:48.006 --rc geninfo_unexecuted_blocks=1 00:17:48.006 00:17:48.006 ' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:17:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.006 --rc genhtml_branch_coverage=1 00:17:48.006 --rc genhtml_function_coverage=1 00:17:48.006 --rc genhtml_legend=1 00:17:48.006 --rc geninfo_all_blocks=1 00:17:48.006 --rc geninfo_unexecuted_blocks=1 00:17:48.006 00:17:48.006 ' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:17:48.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.006 --rc genhtml_branch_coverage=1 00:17:48.006 --rc genhtml_function_coverage=1 00:17:48.006 --rc genhtml_legend=1 00:17:48.006 --rc geninfo_all_blocks=1 00:17:48.006 --rc geninfo_unexecuted_blocks=1 00:17:48.006 00:17:48.006 ' 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:48.006 06:28:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:48.006 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:48.007 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # 
dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:17:48.007 06:28:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:17:48.943 
06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:17:48.943 Found 0000:09:00.0 (0x8086 - 0x159b) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.943 06:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:17:48.943 Found 0000:09:00.1 (0x8086 - 0x159b) 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.943 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:17:48.944 Found net devices under 0000:09:00.0: cvl_0_0 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:17:48.944 Found net devices under 0000:09:00.1: cvl_0_1 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.944 06:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.944 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:48.944 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:17:48.944 00:17:48.944 --- 10.0.0.2 ping statistics --- 00:17:48.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.944 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.944 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.944 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:17:48.944 00:17:48.944 --- 10.0.0.1 ping statistics --- 00:17:48.944 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.944 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2079140 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2079140 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2079140 ']' 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
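The nvmf_tcp_init trace above amounts to a small, fixed piece of network plumbing: one port of the E810 NIC (cvl_0_0) is moved into a private namespace to act as the NVMe/TCP target, the sibling port (cvl_0_1) stays in the root namespace as the initiator, both sides get 10.0.0.0/24 addresses, TCP port 4420 is opened in iptables, and a single ping in each direction confirms reachability before nvmf_tgt is started inside the namespace. A condensed sketch of those steps, using the interface names, addresses, and binary path from this run (ordering slightly simplified, the addr-flush and cleanup steps omitted):

    # target-side namespace and addressing, as set up by nvmf_tcp_init above
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port moves into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator port, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let the NVMe/TCP port in
    ping -c 1 10.0.0.2                                     # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth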
00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:48.944 06:28:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2079171 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=null 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=371be4cb4ad74c570757f7d097188e25d84bc43a7fcd2962 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.K1Q 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 371be4cb4ad74c570757f7d097188e25d84bc43a7fcd2962 0 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 371be4cb4ad74c570757f7d097188e25d84bc43a7fcd2962 0 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=371be4cb4ad74c570757f7d097188e25d84bc43a7fcd2962 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 
00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.K1Q 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.K1Q 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.K1Q 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=3589b65accda9fde6b47cab9230fb7693d39076878ce36ec60496cca36c7fa7e 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.FXn 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 3589b65accda9fde6b47cab9230fb7693d39076878ce36ec60496cca36c7fa7e 3 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 3589b65accda9fde6b47cab9230fb7693d39076878ce36ec60496cca36c7fa7e 3 00:17:49.511 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=3589b65accda9fde6b47cab9230fb7693d39076878ce36ec60496cca36c7fa7e 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.FXn 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.FXn 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.FXn 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 
00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=58903b1767ff6394a525b8857a51fc73 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.MXc 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 58903b1767ff6394a525b8857a51fc73 1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 58903b1767ff6394a525b8857a51fc73 1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=58903b1767ff6394a525b8857a51fc73 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.MXc 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.MXc 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.MXc 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=da1bc262908ebf9b5a2f58e2312a04489ad17dbcafafb484 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.GMB 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key da1bc262908ebf9b5a2f58e2312a04489ad17dbcafafb484 2 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 da1bc262908ebf9b5a2f58e2312a04489ad17dbcafafb484 2 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.512 06:28:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=da1bc262908ebf9b5a2f58e2312a04489ad17dbcafafb484 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.GMB 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.GMB 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.GMB 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=11a454dbb1a6494e44b51e7779b08420ca3f0275bb80200c 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.1k1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 11a454dbb1a6494e44b51e7779b08420ca3f0275bb80200c 2 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 11a454dbb1a6494e44b51e7779b08420ca3f0275bb80200c 2 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=11a454dbb1a6494e44b51e7779b08420ca3f0275bb80200c 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.1k1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.1k1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.1k1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 
00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=42bcc175ffbad0de5e273345aaed54fe 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.ONE 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 42bcc175ffbad0de5e273345aaed54fe 1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 42bcc175ffbad0de5e273345aaed54fe 1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=42bcc175ffbad0de5e273345aaed54fe 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:17:49.512 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.ONE 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.ONE 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.ONE 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=22a9c4ada384c28b5f852c3ffa60fb6fd2d24fb47b40a46093a0952f5bdb876c 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.2Ug 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # 
format_dhchap_key 22a9c4ada384c28b5f852c3ffa60fb6fd2d24fb47b40a46093a0952f5bdb876c 3 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 22a9c4ada384c28b5f852c3ffa60fb6fd2d24fb47b40a46093a0952f5bdb876c 3 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=22a9c4ada384c28b5f852c3ffa60fb6fd2d24fb47b40a46093a0952f5bdb876c 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.2Ug 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.2Ug 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.2Ug 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2079140 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2079140 ']' 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:49.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:49.771 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2079171 /var/tmp/host.sock 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2079171 ']' 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/host.sock 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:17:50.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
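Each gen_dhchap_key call above draws 16, 24, or 32 random bytes with xxd -p, keeps them as a hex string, and hands that string plus a digest index (0=null, 1=sha256, 2=sha384, 3=sha512, per the digests map earlier in the trace) to an inline python helper that emits the DHHC-1 secret later seen on the nvme connect command line. The base64 payload of those secrets decodes back to the ASCII hex key followed by four extra bytes; the sketch below assumes those four bytes are a little-endian CRC-32 of the key string, which is the usual NVMe DH-HMAC-CHAP secret layout (as used by nvme-cli), but that detail is an assumption rather than something read out of nvmf/common.sh, and python3 here stands in for the bare "python -" the helper invokes:

    # sketch of one "gen_dhchap_key null 48" round; the CRC-32 tail is an assumption
    key=$(xxd -p -c0 -l 24 /dev/urandom)      # 24 random bytes -> 48 hex chars
    digest=0                                  # 0=null, 1=sha256, 2=sha384, 3=sha512
    secret=$(python3 - "$key" "$digest" <<'PY'
    import base64, binascii, struct, sys
    key = sys.argv[1].encode()                                # the hex string itself, as ASCII
    crc = struct.pack("<I", binascii.crc32(key) & 0xffffffff) # assumed 4-byte tail
    print("DHHC-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
    PY
    )
    keyfile=$(mktemp -t spdk.key-null.XXX)
    echo "$secret" > "$keyfile"
    chmod 0600 "$keyfile"                     # matches the chmod 0600 in the trace above

The resulting files are what the following trace registers as key0..key3 and ckey0..ckey2 on both the target RPC socket and /var/tmp/host.sock before each authentication round.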
00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:17:50.029 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.K1Q 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.K1Q 00:17:50.287 06:28:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.K1Q 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha512.FXn ]] 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FXn 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FXn 00:17:50.545 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FXn 00:17:50.802 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:50.802 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MXc 00:17:50.802 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:50.802 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.802 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:50.802 06:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.MXc 00:17:50.802 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.MXc 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha384.GMB ]] 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GMB 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GMB 00:17:51.060 06:28:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GMB 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1k1 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.1k1 00:17:51.318 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.1k1 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.ONE ]] 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONE 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONE 00:17:51.575 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONE 00:17:51.833 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:17:51.833 06:28:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Ug 00:17:51.833 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:51.833 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.833 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:51.833 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.2Ug 00:17:51.833 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.2Ug 00:17:52.399 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:17:52.399 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:52.399 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.399 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:52.399 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.399 06:28:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.399 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.399 
06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.965 00:17:52.965 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:52.965 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:52.965 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:53.223 { 00:17:53.223 "cntlid": 1, 00:17:53.223 "qid": 0, 00:17:53.223 "state": "enabled", 00:17:53.223 "thread": "nvmf_tgt_poll_group_000", 00:17:53.223 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:53.223 "listen_address": { 00:17:53.223 "trtype": "TCP", 00:17:53.223 "adrfam": "IPv4", 00:17:53.223 "traddr": "10.0.0.2", 00:17:53.223 "trsvcid": "4420" 00:17:53.223 }, 00:17:53.223 "peer_address": { 00:17:53.223 "trtype": "TCP", 00:17:53.223 "adrfam": "IPv4", 00:17:53.223 "traddr": "10.0.0.1", 00:17:53.223 "trsvcid": "36872" 00:17:53.223 }, 00:17:53.223 "auth": { 00:17:53.223 "state": "completed", 00:17:53.223 "digest": "sha256", 00:17:53.223 "dhgroup": "null" 00:17:53.223 } 00:17:53.223 } 00:17:53.223 ]' 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.223 06:28:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.481 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:17:53.481 06:28:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:54.415 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.415 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.673 06:28:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.673 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.931 00:17:54.931 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:54.931 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:54.931 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.189 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.189 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.189 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:55.189 06:28:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.189 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:55.189 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:55.189 { 00:17:55.189 "cntlid": 3, 00:17:55.189 "qid": 0, 00:17:55.189 "state": "enabled", 00:17:55.189 "thread": "nvmf_tgt_poll_group_000", 00:17:55.189 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:55.189 "listen_address": { 00:17:55.189 "trtype": "TCP", 00:17:55.189 "adrfam": "IPv4", 00:17:55.189 "traddr": "10.0.0.2", 00:17:55.189 "trsvcid": "4420" 00:17:55.189 }, 00:17:55.189 "peer_address": { 00:17:55.189 "trtype": "TCP", 00:17:55.189 "adrfam": "IPv4", 00:17:55.189 "traddr": "10.0.0.1", 00:17:55.189 "trsvcid": "40366" 00:17:55.189 }, 00:17:55.189 "auth": { 00:17:55.189 "state": "completed", 00:17:55.189 "digest": "sha256", 00:17:55.189 "dhgroup": "null" 00:17:55.189 } 00:17:55.189 } 00:17:55.189 ]' 00:17:55.189 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.448 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.707 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:17:55.707 06:28:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.642 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:56.900 06:28:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:56.900 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:57.157 00:17:57.158 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:57.158 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:57.158 06:28:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:57.415 { 00:17:57.415 "cntlid": 5, 00:17:57.415 "qid": 0, 00:17:57.415 "state": "enabled", 00:17:57.415 "thread": "nvmf_tgt_poll_group_000", 00:17:57.415 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:57.415 "listen_address": { 00:17:57.415 "trtype": "TCP", 00:17:57.415 "adrfam": "IPv4", 00:17:57.415 "traddr": "10.0.0.2", 00:17:57.415 "trsvcid": "4420" 00:17:57.415 }, 00:17:57.415 "peer_address": { 00:17:57.415 "trtype": "TCP", 00:17:57.415 "adrfam": "IPv4", 00:17:57.415 "traddr": "10.0.0.1", 00:17:57.415 "trsvcid": "40386" 00:17:57.415 }, 00:17:57.415 "auth": { 00:17:57.415 "state": "completed", 00:17:57.415 "digest": "sha256", 00:17:57.415 "dhgroup": "null" 00:17:57.415 } 00:17:57.415 } 00:17:57.415 ]' 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:57.415 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:57.677 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:57.677 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:57.677 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.677 06:28:29 
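Each successful attach in this trace is verified the same way: the target's nvmf_subsystem_get_qpairs output for cnode0 is filtered with jq and the negotiated digest, DH group and authentication state are compared against the expected values. A minimal sketch of that check, using the values from the null-DH-group rounds:
qpairs=$(rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
[[ $(echo "$qpairs" | jq -r '.[0].auth.digest')  == sha256    ]]   # digest configured for this round
[[ $(echo "$qpairs" | jq -r '.[0].auth.dhgroup') == null      ]]   # DH group (ffdhe2048/ffdhe3072 in later rounds)
[[ $(echo "$qpairs" | jq -r '.[0].auth.state')   == completed ]]   # authentication finished successfully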
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.677 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.935 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:17:57.936 06:28:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:58.869 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.127 06:28:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:59.386 00:17:59.386 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:59.386 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:59.386 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:59.644 { 00:17:59.644 "cntlid": 7, 00:17:59.644 "qid": 0, 00:17:59.644 "state": "enabled", 00:17:59.644 "thread": "nvmf_tgt_poll_group_000", 00:17:59.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:17:59.644 "listen_address": { 00:17:59.644 "trtype": "TCP", 00:17:59.644 "adrfam": "IPv4", 00:17:59.644 "traddr": "10.0.0.2", 00:17:59.644 "trsvcid": "4420" 00:17:59.644 }, 00:17:59.644 "peer_address": { 00:17:59.644 "trtype": "TCP", 00:17:59.644 "adrfam": "IPv4", 00:17:59.644 "traddr": "10.0.0.1", 00:17:59.644 "trsvcid": "40416" 00:17:59.644 }, 00:17:59.644 "auth": { 00:17:59.644 "state": "completed", 00:17:59.644 "digest": "sha256", 00:17:59.644 "dhgroup": "null" 00:17:59.644 } 00:17:59.644 } 00:17:59.644 ]' 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:59.644 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:59.902 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:59.902 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:59.902 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
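The key3 rounds differ from the others in one respect: ckeys[3] is empty in the script, so the ${ckeys[$3]:+...} expansion visible above drops the --dhchap-ctrlr-key argument, and both nvmf_subsystem_add_host and bdev_nvme_attach_controller are called with --dhchap-key key3 alone. The matching nvme-cli connect a little further down likewise passes only --dhchap-secret, i.e. unidirectional authentication: the host proves its identity but does not request that the controller authenticate back. Condensed (HOSTNQN as in the earlier sketch, secret elided):
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$HOSTNQN" --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 'DHHC-1:03:...'
nvme disconnect -n nqn.2024-03.io.spdk:cnode0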
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:59.902 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:59.902 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.159 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:00.159 06:28:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:01.093 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.093 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:01.094 06:28:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
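At this point the outer loop moves on from the null DH group: the host-side bdev_nvme layer is reconfigured to offer ffdhe2048 and the whole key0..key3 cycle repeats, and the same happens again later for ffdhe3072. The structure, reduced to a sketch (the actual script iterates arrays of DH groups and pre-registered keys defined earlier in target/auth.sh):
for dhgroup in null ffdhe2048 ffdhe3072; do
    # restrict the initiator to sha256 plus a single DH group for this round
    rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    for keyid in 0 1 2 3; do
        : # add_host / attach / verify qpair auth / nvme connect / teardown, as in the sketches above
    done
done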
common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.353 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:01.633 00:18:01.633 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:01.633 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:01.633 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:01.920 { 00:18:01.920 "cntlid": 9, 00:18:01.920 "qid": 0, 00:18:01.920 "state": "enabled", 00:18:01.920 "thread": "nvmf_tgt_poll_group_000", 00:18:01.920 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:01.920 "listen_address": { 00:18:01.920 "trtype": "TCP", 00:18:01.920 "adrfam": "IPv4", 00:18:01.920 "traddr": "10.0.0.2", 00:18:01.920 "trsvcid": "4420" 00:18:01.920 }, 00:18:01.920 "peer_address": { 00:18:01.920 "trtype": "TCP", 00:18:01.920 "adrfam": "IPv4", 00:18:01.920 "traddr": "10.0.0.1", 00:18:01.920 "trsvcid": "40442" 00:18:01.920 }, 00:18:01.920 "auth": { 00:18:01.920 "state": "completed", 00:18:01.920 "digest": "sha256", 00:18:01.920 "dhgroup": "ffdhe2048" 00:18:01.920 } 00:18:01.920 } 00:18:01.920 ]' 00:18:01.920 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == 
\f\f\d\h\e\2\0\4\8 ]] 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:02.179 06:28:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.437 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:02.437 06:28:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.371 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:03.937 06:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:03.937 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:04.195 00:18:04.195 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:04.195 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:04.195 06:28:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:04.453 { 00:18:04.453 "cntlid": 11, 00:18:04.453 "qid": 0, 00:18:04.453 "state": "enabled", 00:18:04.453 "thread": "nvmf_tgt_poll_group_000", 00:18:04.453 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:04.453 "listen_address": { 00:18:04.453 "trtype": "TCP", 00:18:04.453 "adrfam": "IPv4", 00:18:04.453 "traddr": "10.0.0.2", 00:18:04.453 "trsvcid": "4420" 00:18:04.453 }, 00:18:04.453 "peer_address": { 00:18:04.453 "trtype": "TCP", 00:18:04.453 "adrfam": "IPv4", 00:18:04.453 "traddr": "10.0.0.1", 00:18:04.453 "trsvcid": "40458" 00:18:04.453 }, 00:18:04.453 "auth": { 00:18:04.453 "state": "completed", 00:18:04.453 "digest": "sha256", 00:18:04.453 "dhgroup": "ffdhe2048" 00:18:04.453 } 00:18:04.453 } 00:18:04.453 ]' 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:04.453 06:28:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.453 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.019 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:05.019 06:28:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:05.584 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:05.841 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:06.100 06:28:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.100 06:28:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:06.358 00:18:06.358 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:06.358 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.358 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:06.616 { 00:18:06.616 "cntlid": 13, 00:18:06.616 "qid": 0, 00:18:06.616 "state": "enabled", 00:18:06.616 "thread": "nvmf_tgt_poll_group_000", 00:18:06.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:06.616 "listen_address": { 00:18:06.616 "trtype": "TCP", 00:18:06.616 "adrfam": "IPv4", 00:18:06.616 "traddr": "10.0.0.2", 00:18:06.616 "trsvcid": "4420" 00:18:06.616 }, 00:18:06.616 "peer_address": { 00:18:06.616 "trtype": "TCP", 00:18:06.616 "adrfam": "IPv4", 00:18:06.616 "traddr": "10.0.0.1", 00:18:06.616 "trsvcid": "45138" 00:18:06.616 }, 00:18:06.616 "auth": { 00:18:06.616 "state": "completed", 00:18:06.616 "digest": 
"sha256", 00:18:06.616 "dhgroup": "ffdhe2048" 00:18:06.616 } 00:18:06.616 } 00:18:06.616 ]' 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:06.616 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:06.875 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:06.875 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:06.875 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.875 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.875 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:07.132 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:07.132 06:28:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:08.067 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.067 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.068 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:08.326 06:28:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.326 06:28:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:08.584 00:18:08.584 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:08.584 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:08.584 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.151 { 00:18:09.151 "cntlid": 15, 00:18:09.151 "qid": 0, 00:18:09.151 "state": "enabled", 00:18:09.151 "thread": "nvmf_tgt_poll_group_000", 00:18:09.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:09.151 "listen_address": { 00:18:09.151 "trtype": "TCP", 00:18:09.151 "adrfam": "IPv4", 00:18:09.151 "traddr": "10.0.0.2", 00:18:09.151 "trsvcid": "4420" 00:18:09.151 }, 00:18:09.151 "peer_address": { 00:18:09.151 "trtype": "TCP", 00:18:09.151 "adrfam": "IPv4", 00:18:09.151 "traddr": "10.0.0.1", 00:18:09.151 
"trsvcid": "45164" 00:18:09.151 }, 00:18:09.151 "auth": { 00:18:09.151 "state": "completed", 00:18:09.151 "digest": "sha256", 00:18:09.151 "dhgroup": "ffdhe2048" 00:18:09.151 } 00:18:09.151 } 00:18:09.151 ]' 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.151 06:28:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.409 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:09.410 06:28:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.344 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.344 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:10.602 06:28:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.602 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:11.168 00:18:11.168 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.168 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.168 06:28:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.426 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.426 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.426 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:11.426 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.426 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:11.426 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.426 { 00:18:11.426 "cntlid": 17, 00:18:11.426 "qid": 0, 00:18:11.426 "state": "enabled", 00:18:11.426 "thread": "nvmf_tgt_poll_group_000", 00:18:11.426 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:11.426 "listen_address": { 00:18:11.426 "trtype": "TCP", 00:18:11.426 "adrfam": "IPv4", 
00:18:11.426 "traddr": "10.0.0.2", 00:18:11.426 "trsvcid": "4420" 00:18:11.426 }, 00:18:11.426 "peer_address": { 00:18:11.427 "trtype": "TCP", 00:18:11.427 "adrfam": "IPv4", 00:18:11.427 "traddr": "10.0.0.1", 00:18:11.427 "trsvcid": "45194" 00:18:11.427 }, 00:18:11.427 "auth": { 00:18:11.427 "state": "completed", 00:18:11.427 "digest": "sha256", 00:18:11.427 "dhgroup": "ffdhe3072" 00:18:11.427 } 00:18:11.427 } 00:18:11.427 ]' 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.427 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.685 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:11.685 06:28:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.619 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:12.619 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.878 06:28:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:13.444 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:13.444 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:13.702 { 
00:18:13.702 "cntlid": 19, 00:18:13.702 "qid": 0, 00:18:13.702 "state": "enabled", 00:18:13.702 "thread": "nvmf_tgt_poll_group_000", 00:18:13.702 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:13.702 "listen_address": { 00:18:13.702 "trtype": "TCP", 00:18:13.702 "adrfam": "IPv4", 00:18:13.702 "traddr": "10.0.0.2", 00:18:13.702 "trsvcid": "4420" 00:18:13.702 }, 00:18:13.702 "peer_address": { 00:18:13.702 "trtype": "TCP", 00:18:13.702 "adrfam": "IPv4", 00:18:13.702 "traddr": "10.0.0.1", 00:18:13.702 "trsvcid": "45212" 00:18:13.702 }, 00:18:13.702 "auth": { 00:18:13.702 "state": "completed", 00:18:13.702 "digest": "sha256", 00:18:13.702 "dhgroup": "ffdhe3072" 00:18:13.702 } 00:18:13.702 } 00:18:13.702 ]' 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.702 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.960 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:13.960 06:28:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:14.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:14.893 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.151 06:28:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:15.717 00:18:15.718 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.718 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.718 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.976 06:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:15.976 { 00:18:15.976 "cntlid": 21, 00:18:15.976 "qid": 0, 00:18:15.976 "state": "enabled", 00:18:15.976 "thread": "nvmf_tgt_poll_group_000", 00:18:15.976 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:15.976 "listen_address": { 00:18:15.976 "trtype": "TCP", 00:18:15.976 "adrfam": "IPv4", 00:18:15.976 "traddr": "10.0.0.2", 00:18:15.976 "trsvcid": "4420" 00:18:15.976 }, 00:18:15.976 "peer_address": { 00:18:15.976 "trtype": "TCP", 00:18:15.976 "adrfam": "IPv4", 00:18:15.976 "traddr": "10.0.0.1", 00:18:15.976 "trsvcid": "33128" 00:18:15.976 }, 00:18:15.976 "auth": { 00:18:15.976 "state": "completed", 00:18:15.976 "digest": "sha256", 00:18:15.976 "dhgroup": "ffdhe3072" 00:18:15.976 } 00:18:15.976 } 00:18:15.976 ]' 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:15.976 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.234 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:16.234 06:28:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.168 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.168 06:28:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.426 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:17.990 00:18:17.990 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.990 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.990 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:18.247 06:28:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:18.247 { 00:18:18.247 "cntlid": 23, 00:18:18.247 "qid": 0, 00:18:18.247 "state": "enabled", 00:18:18.247 "thread": "nvmf_tgt_poll_group_000", 00:18:18.247 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:18.247 "listen_address": { 00:18:18.247 "trtype": "TCP", 00:18:18.247 "adrfam": "IPv4", 00:18:18.247 "traddr": "10.0.0.2", 00:18:18.247 "trsvcid": "4420" 00:18:18.247 }, 00:18:18.247 "peer_address": { 00:18:18.247 "trtype": "TCP", 00:18:18.247 "adrfam": "IPv4", 00:18:18.247 "traddr": "10.0.0.1", 00:18:18.247 "trsvcid": "33164" 00:18:18.247 }, 00:18:18.247 "auth": { 00:18:18.247 "state": "completed", 00:18:18.247 "digest": "sha256", 00:18:18.247 "dhgroup": "ffdhe3072" 00:18:18.247 } 00:18:18.247 } 00:18:18.247 ]' 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:18.247 06:28:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:18.247 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:18.247 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:18.247 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.247 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.247 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.505 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:18.505 06:28:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:19.437 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:19.695 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:19.695 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:19.696 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:19.954 06:28:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.211 00:18:20.211 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.211 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.211 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.469 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.469 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.469 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:20.469 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.727 { 00:18:20.727 "cntlid": 25, 00:18:20.727 "qid": 0, 00:18:20.727 "state": "enabled", 00:18:20.727 "thread": "nvmf_tgt_poll_group_000", 00:18:20.727 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:20.727 "listen_address": { 00:18:20.727 "trtype": "TCP", 00:18:20.727 "adrfam": "IPv4", 00:18:20.727 "traddr": "10.0.0.2", 00:18:20.727 "trsvcid": "4420" 00:18:20.727 }, 00:18:20.727 "peer_address": { 00:18:20.727 "trtype": "TCP", 00:18:20.727 "adrfam": "IPv4", 00:18:20.727 "traddr": "10.0.0.1", 00:18:20.727 "trsvcid": "33192" 00:18:20.727 }, 00:18:20.727 "auth": { 00:18:20.727 "state": "completed", 00:18:20.727 "digest": "sha256", 00:18:20.727 "dhgroup": "ffdhe4096" 00:18:20.727 } 00:18:20.727 } 00:18:20.727 ]' 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:20.727 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:20.986 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:20.986 06:28:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.918 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:21.918 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.176 06:28:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.434 00:18:22.691 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.691 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.691 06:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.949 { 00:18:22.949 "cntlid": 27, 00:18:22.949 "qid": 0, 00:18:22.949 "state": "enabled", 00:18:22.949 "thread": "nvmf_tgt_poll_group_000", 00:18:22.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:22.949 "listen_address": { 00:18:22.949 "trtype": "TCP", 00:18:22.949 "adrfam": "IPv4", 00:18:22.949 "traddr": "10.0.0.2", 00:18:22.949 "trsvcid": "4420" 00:18:22.949 }, 00:18:22.949 "peer_address": { 00:18:22.949 "trtype": "TCP", 00:18:22.949 "adrfam": "IPv4", 00:18:22.949 "traddr": "10.0.0.1", 00:18:22.949 "trsvcid": "33228" 00:18:22.949 }, 00:18:22.949 "auth": { 00:18:22.949 "state": "completed", 00:18:22.949 "digest": "sha256", 00:18:22.949 "dhgroup": "ffdhe4096" 00:18:22.949 } 00:18:22.949 } 00:18:22.949 ]' 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.949 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:23.207 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:23.207 06:28:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:24.141 
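The nvme connect issued just above is the in-band half of the check: after the RPC-driven attach has been verified, the script also connects with nvme-cli (the kernel initiator), passing the same DH-HMAC-CHAP material on the command line, then tears the session down before moving to the next key. A minimal sketch of that step, reconstructed from the nvme connect/disconnect invocations visible in this trace; $hostnqn, $hostid, $key and $ckey are illustrative stand-ins for the literal NQN, host ID and DHHC-1 secrets printed above, and the remove_host call is shown as a plain scripts/rpc.py invocation (full workspace path omitted) although the suite routes it through its rpc_cmd wrapper:

# connect in-band with the secrets the target-side host entry was configured with
nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
    -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$key" --dhchap-ctrl-secret "$ckey"
# drop the connection and deregister the host before the next digest/dhgroup/key combination
nvme disconnect -n nqn.2024-03.io.spdk:cnode0
scripts/rpc.py nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"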
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.141 06:28:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.400 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:24.966 00:18:24.966 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:24.966 06:28:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:24.966 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.225 { 00:18:25.225 "cntlid": 29, 00:18:25.225 "qid": 0, 00:18:25.225 "state": "enabled", 00:18:25.225 "thread": "nvmf_tgt_poll_group_000", 00:18:25.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:25.225 "listen_address": { 00:18:25.225 "trtype": "TCP", 00:18:25.225 "adrfam": "IPv4", 00:18:25.225 "traddr": "10.0.0.2", 00:18:25.225 "trsvcid": "4420" 00:18:25.225 }, 00:18:25.225 "peer_address": { 00:18:25.225 "trtype": "TCP", 00:18:25.225 "adrfam": "IPv4", 00:18:25.225 "traddr": "10.0.0.1", 00:18:25.225 "trsvcid": "45182" 00:18:25.225 }, 00:18:25.225 "auth": { 00:18:25.225 "state": "completed", 00:18:25.225 "digest": "sha256", 00:18:25.225 "dhgroup": "ffdhe4096" 00:18:25.225 } 00:18:25.225 } 00:18:25.225 ]' 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.225 06:28:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.483 06:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:25.483 06:28:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret 
DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.414 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:26.672 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:26.673 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:27.236 00:18:27.236 06:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:27.236 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:27.236 06:28:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.494 { 00:18:27.494 "cntlid": 31, 00:18:27.494 "qid": 0, 00:18:27.494 "state": "enabled", 00:18:27.494 "thread": "nvmf_tgt_poll_group_000", 00:18:27.494 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:27.494 "listen_address": { 00:18:27.494 "trtype": "TCP", 00:18:27.494 "adrfam": "IPv4", 00:18:27.494 "traddr": "10.0.0.2", 00:18:27.494 "trsvcid": "4420" 00:18:27.494 }, 00:18:27.494 "peer_address": { 00:18:27.494 "trtype": "TCP", 00:18:27.494 "adrfam": "IPv4", 00:18:27.494 "traddr": "10.0.0.1", 00:18:27.494 "trsvcid": "45208" 00:18:27.494 }, 00:18:27.494 "auth": { 00:18:27.494 "state": "completed", 00:18:27.494 "digest": "sha256", 00:18:27.494 "dhgroup": "ffdhe4096" 00:18:27.494 } 00:18:27.494 } 00:18:27.494 ]' 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.494 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.765 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:27.765 06:28:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret 
DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.699 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:28.699 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.264 06:29:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:29.522 00:18:29.522 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:29.522 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:29.522 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.087 { 00:18:30.087 "cntlid": 33, 00:18:30.087 "qid": 0, 00:18:30.087 "state": "enabled", 00:18:30.087 "thread": "nvmf_tgt_poll_group_000", 00:18:30.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:30.087 "listen_address": { 00:18:30.087 "trtype": "TCP", 00:18:30.087 "adrfam": "IPv4", 00:18:30.087 "traddr": "10.0.0.2", 00:18:30.087 "trsvcid": "4420" 00:18:30.087 }, 00:18:30.087 "peer_address": { 00:18:30.087 "trtype": "TCP", 00:18:30.087 "adrfam": "IPv4", 00:18:30.087 "traddr": "10.0.0.1", 00:18:30.087 "trsvcid": "45238" 00:18:30.087 }, 00:18:30.087 "auth": { 00:18:30.087 "state": "completed", 00:18:30.087 "digest": "sha256", 00:18:30.087 "dhgroup": "ffdhe6144" 00:18:30.087 } 00:18:30.087 } 00:18:30.087 ]' 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.087 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.088 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:30.088 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.088 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.088 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.088 06:29:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret 
DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:30.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:31.346 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.347 06:29:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:31.605 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:32.170 00:18:32.170 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:32.170 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:32.170 06:29:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:32.427 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:32.427 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:32.427 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:32.427 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.427 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:32.427 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:32.427 { 00:18:32.428 "cntlid": 35, 00:18:32.428 "qid": 0, 00:18:32.428 "state": "enabled", 00:18:32.428 "thread": "nvmf_tgt_poll_group_000", 00:18:32.428 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:32.428 "listen_address": { 00:18:32.428 "trtype": "TCP", 00:18:32.428 "adrfam": "IPv4", 00:18:32.428 "traddr": "10.0.0.2", 00:18:32.428 "trsvcid": "4420" 00:18:32.428 }, 00:18:32.428 "peer_address": { 00:18:32.428 "trtype": "TCP", 00:18:32.428 "adrfam": "IPv4", 00:18:32.428 "traddr": "10.0.0.1", 00:18:32.428 "trsvcid": "45268" 00:18:32.428 }, 00:18:32.428 "auth": { 00:18:32.428 "state": "completed", 00:18:32.428 "digest": "sha256", 00:18:32.428 "dhgroup": "ffdhe6144" 00:18:32.428 } 00:18:32.428 } 00:18:32.428 ]' 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.428 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.686 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:32.686 06:29:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:33.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.617 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:33.875 06:29:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:34.441 00:18:34.441 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:34.441 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:34.441 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:34.699 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:34.699 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:34.699 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:34.699 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.699 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:34.699 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:34.699 { 00:18:34.699 "cntlid": 37, 00:18:34.699 "qid": 0, 00:18:34.699 "state": "enabled", 00:18:34.699 "thread": "nvmf_tgt_poll_group_000", 00:18:34.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:34.700 "listen_address": { 00:18:34.700 "trtype": "TCP", 00:18:34.700 "adrfam": "IPv4", 00:18:34.700 "traddr": "10.0.0.2", 00:18:34.700 "trsvcid": "4420" 00:18:34.700 }, 00:18:34.700 "peer_address": { 00:18:34.700 "trtype": "TCP", 00:18:34.700 "adrfam": "IPv4", 00:18:34.700 "traddr": "10.0.0.1", 00:18:34.700 "trsvcid": "45282" 00:18:34.700 }, 00:18:34.700 "auth": { 00:18:34.700 "state": "completed", 00:18:34.700 "digest": "sha256", 00:18:34.700 "dhgroup": "ffdhe6144" 00:18:34.700 } 00:18:34.700 } 00:18:34.700 ]' 00:18:34.700 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:34.700 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:34.700 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:34.958 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:34.958 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:34.958 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:34.958 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:18:34.958 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.216 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:35.216 06:29:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.151 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.151 06:29:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.409 06:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.409 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:36.975 00:18:36.975 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.975 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.975 06:29:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.233 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.233 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.233 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:37.234 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.234 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:37.234 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.234 { 00:18:37.234 "cntlid": 39, 00:18:37.234 "qid": 0, 00:18:37.234 "state": "enabled", 00:18:37.234 "thread": "nvmf_tgt_poll_group_000", 00:18:37.234 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:37.234 "listen_address": { 00:18:37.234 "trtype": "TCP", 00:18:37.234 "adrfam": "IPv4", 00:18:37.234 "traddr": "10.0.0.2", 00:18:37.234 "trsvcid": "4420" 00:18:37.234 }, 00:18:37.234 "peer_address": { 00:18:37.234 "trtype": "TCP", 00:18:37.234 "adrfam": "IPv4", 00:18:37.234 "traddr": "10.0.0.1", 00:18:37.234 "trsvcid": "48550" 00:18:37.234 }, 00:18:37.234 "auth": { 00:18:37.234 "state": "completed", 00:18:37.234 "digest": "sha256", 00:18:37.234 "dhgroup": "ffdhe6144" 00:18:37.234 } 00:18:37.234 } 00:18:37.234 ]' 00:18:37.234 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == 
\c\o\m\p\l\e\t\e\d ]] 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.491 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.749 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:37.749 06:29:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:38.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.682 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:38.940 06:29:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.873 00:18:39.873 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:39.873 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:39.873 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.131 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.132 { 00:18:40.132 "cntlid": 41, 00:18:40.132 "qid": 0, 00:18:40.132 "state": "enabled", 00:18:40.132 "thread": "nvmf_tgt_poll_group_000", 00:18:40.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:40.132 "listen_address": { 00:18:40.132 "trtype": "TCP", 00:18:40.132 "adrfam": "IPv4", 00:18:40.132 "traddr": "10.0.0.2", 00:18:40.132 "trsvcid": "4420" 00:18:40.132 }, 00:18:40.132 "peer_address": { 00:18:40.132 "trtype": "TCP", 00:18:40.132 "adrfam": "IPv4", 00:18:40.132 "traddr": "10.0.0.1", 00:18:40.132 "trsvcid": "48576" 00:18:40.132 }, 00:18:40.132 "auth": { 00:18:40.132 "state": "completed", 00:18:40.132 "digest": "sha256", 00:18:40.132 "dhgroup": "ffdhe8192" 00:18:40.132 } 00:18:40.132 } 00:18:40.132 ]' 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.132 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.132 06:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.390 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.390 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.390 06:29:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.648 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:40.648 06:29:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.582 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.582 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # 
rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.840 06:29:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:42.775 00:18:42.775 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:42.775 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.775 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.034 { 00:18:43.034 "cntlid": 43, 00:18:43.034 "qid": 0, 00:18:43.034 "state": "enabled", 00:18:43.034 "thread": "nvmf_tgt_poll_group_000", 00:18:43.034 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:43.034 "listen_address": { 00:18:43.034 "trtype": "TCP", 00:18:43.034 "adrfam": "IPv4", 00:18:43.034 "traddr": "10.0.0.2", 00:18:43.034 "trsvcid": "4420" 00:18:43.034 }, 00:18:43.034 "peer_address": { 00:18:43.034 "trtype": "TCP", 00:18:43.034 "adrfam": "IPv4", 00:18:43.034 "traddr": "10.0.0.1", 00:18:43.034 "trsvcid": "48604" 00:18:43.034 }, 00:18:43.034 "auth": { 00:18:43.034 "state": "completed", 00:18:43.034 "digest": "sha256", 00:18:43.034 "dhgroup": "ffdhe8192" 00:18:43.034 } 00:18:43.034 } 00:18:43.034 ]' 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == 
\s\h\a\2\5\6 ]] 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:43.034 06:29:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:43.292 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:43.292 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:44.227 06:29:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:44.227 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.227 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:44.484 06:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:44.484 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:44.485 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.485 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:44.485 06:29:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:45.418 00:18:45.418 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.418 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.418 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.676 { 00:18:45.676 "cntlid": 45, 00:18:45.676 "qid": 0, 00:18:45.676 "state": "enabled", 00:18:45.676 "thread": "nvmf_tgt_poll_group_000", 00:18:45.676 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:45.676 "listen_address": { 00:18:45.676 "trtype": "TCP", 00:18:45.676 "adrfam": "IPv4", 00:18:45.676 "traddr": "10.0.0.2", 00:18:45.676 "trsvcid": "4420" 00:18:45.676 }, 00:18:45.676 "peer_address": { 00:18:45.676 "trtype": "TCP", 00:18:45.676 "adrfam": "IPv4", 00:18:45.676 "traddr": "10.0.0.1", 00:18:45.676 "trsvcid": "44746" 00:18:45.676 }, 00:18:45.676 "auth": { 00:18:45.676 "state": "completed", 00:18:45.676 "digest": "sha256", 00:18:45.676 "dhgroup": "ffdhe8192" 00:18:45.676 } 00:18:45.676 } 00:18:45.676 ]' 00:18:45.676 
06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.676 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.935 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.935 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.935 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.193 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:46.194 06:29:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:47.127 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.127 06:29:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:47.384 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:47.384 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:47.385 06:29:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:47.385 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:48.317 00:18:48.317 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.317 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.317 06:29:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.317 { 00:18:48.317 "cntlid": 47, 00:18:48.317 "qid": 0, 00:18:48.317 "state": "enabled", 00:18:48.317 "thread": "nvmf_tgt_poll_group_000", 00:18:48.317 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:48.317 "listen_address": { 00:18:48.317 "trtype": "TCP", 00:18:48.317 "adrfam": "IPv4", 00:18:48.317 "traddr": "10.0.0.2", 00:18:48.317 "trsvcid": "4420" 00:18:48.317 }, 00:18:48.317 "peer_address": { 00:18:48.317 "trtype": "TCP", 00:18:48.317 "adrfam": "IPv4", 00:18:48.317 "traddr": "10.0.0.1", 00:18:48.317 "trsvcid": "44768" 00:18:48.317 }, 00:18:48.317 "auth": { 00:18:48.317 "state": "completed", 00:18:48.317 
"digest": "sha256", 00:18:48.317 "dhgroup": "ffdhe8192" 00:18:48.317 } 00:18:48.317 } 00:18:48.317 ]' 00:18:48.317 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:48.575 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:48.833 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:48.833 06:29:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.766 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.766 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:50.024 06:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.024 06:29:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:50.282 00:18:50.282 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.282 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.282 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.541 { 00:18:50.541 "cntlid": 49, 00:18:50.541 "qid": 0, 00:18:50.541 "state": "enabled", 00:18:50.541 "thread": "nvmf_tgt_poll_group_000", 00:18:50.541 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:50.541 "listen_address": { 00:18:50.541 "trtype": "TCP", 00:18:50.541 "adrfam": "IPv4", 
00:18:50.541 "traddr": "10.0.0.2", 00:18:50.541 "trsvcid": "4420" 00:18:50.541 }, 00:18:50.541 "peer_address": { 00:18:50.541 "trtype": "TCP", 00:18:50.541 "adrfam": "IPv4", 00:18:50.541 "traddr": "10.0.0.1", 00:18:50.541 "trsvcid": "44784" 00:18:50.541 }, 00:18:50.541 "auth": { 00:18:50.541 "state": "completed", 00:18:50.541 "digest": "sha384", 00:18:50.541 "dhgroup": "null" 00:18:50.541 } 00:18:50.541 } 00:18:50.541 ]' 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.541 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.799 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.799 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.799 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.799 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.799 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:51.056 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:51.056 06:29:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.989 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.989 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:51.990 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.247 06:29:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:52.506 00:18:52.506 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:52.506 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:52.506 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.764 { 00:18:52.764 "cntlid": 51, 00:18:52.764 "qid": 0, 00:18:52.764 "state": "enabled", 
00:18:52.764 "thread": "nvmf_tgt_poll_group_000", 00:18:52.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:52.764 "listen_address": { 00:18:52.764 "trtype": "TCP", 00:18:52.764 "adrfam": "IPv4", 00:18:52.764 "traddr": "10.0.0.2", 00:18:52.764 "trsvcid": "4420" 00:18:52.764 }, 00:18:52.764 "peer_address": { 00:18:52.764 "trtype": "TCP", 00:18:52.764 "adrfam": "IPv4", 00:18:52.764 "traddr": "10.0.0.1", 00:18:52.764 "trsvcid": "44812" 00:18:52.764 }, 00:18:52.764 "auth": { 00:18:52.764 "state": "completed", 00:18:52.764 "digest": "sha384", 00:18:52.764 "dhgroup": "null" 00:18:52.764 } 00:18:52.764 } 00:18:52.764 ]' 00:18:52.764 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.021 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.279 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:53.279 06:29:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.217 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 
00:18:54.217 06:29:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.476 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:54.733 00:18:54.733 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:54.990 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:54.990 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:55.248 06:29:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.248 { 00:18:55.248 "cntlid": 53, 00:18:55.248 "qid": 0, 00:18:55.248 "state": "enabled", 00:18:55.248 "thread": "nvmf_tgt_poll_group_000", 00:18:55.248 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:55.248 "listen_address": { 00:18:55.248 "trtype": "TCP", 00:18:55.248 "adrfam": "IPv4", 00:18:55.248 "traddr": "10.0.0.2", 00:18:55.248 "trsvcid": "4420" 00:18:55.248 }, 00:18:55.248 "peer_address": { 00:18:55.248 "trtype": "TCP", 00:18:55.248 "adrfam": "IPv4", 00:18:55.248 "traddr": "10.0.0.1", 00:18:55.248 "trsvcid": "43500" 00:18:55.248 }, 00:18:55.248 "auth": { 00:18:55.248 "state": "completed", 00:18:55.248 "digest": "sha384", 00:18:55.248 "dhgroup": "null" 00:18:55.248 } 00:18:55.248 } 00:18:55.248 ]' 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.248 06:29:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.506 06:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:55.506 06:29:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in 
"${!keys[@]}" 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:56.439 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:56.697 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.262 00:18:57.262 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:57.262 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:57.262 06:29:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:57.521 { 00:18:57.521 "cntlid": 55, 00:18:57.521 "qid": 0, 00:18:57.521 "state": "enabled", 00:18:57.521 "thread": "nvmf_tgt_poll_group_000", 00:18:57.521 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:57.521 "listen_address": { 00:18:57.521 "trtype": "TCP", 00:18:57.521 "adrfam": "IPv4", 00:18:57.521 "traddr": "10.0.0.2", 00:18:57.521 "trsvcid": "4420" 00:18:57.521 }, 00:18:57.521 "peer_address": { 00:18:57.521 "trtype": "TCP", 00:18:57.521 "adrfam": "IPv4", 00:18:57.521 "traddr": "10.0.0.1", 00:18:57.521 "trsvcid": "43520" 00:18:57.521 }, 00:18:57.521 "auth": { 00:18:57.521 "state": "completed", 00:18:57.521 "digest": "sha384", 00:18:57.521 "dhgroup": "null" 00:18:57.521 } 00:18:57.521 } 00:18:57.521 ]' 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:57.521 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.522 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.522 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.522 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.783 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:57.783 06:29:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:18:58.717 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:58.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:58.717 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:18:58.717 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.717 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.717 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.717 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:58.718 06:29:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:58.718 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.718 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:58.976 06:29:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.235 00:18:59.235 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.235 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.235 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.800 { 00:18:59.800 "cntlid": 57, 00:18:59.800 "qid": 0, 00:18:59.800 "state": "enabled", 00:18:59.800 "thread": "nvmf_tgt_poll_group_000", 00:18:59.800 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:18:59.800 "listen_address": { 00:18:59.800 "trtype": "TCP", 00:18:59.800 "adrfam": "IPv4", 00:18:59.800 "traddr": "10.0.0.2", 00:18:59.800 "trsvcid": "4420" 00:18:59.800 }, 00:18:59.800 "peer_address": { 00:18:59.800 "trtype": "TCP", 00:18:59.800 "adrfam": "IPv4", 00:18:59.800 "traddr": "10.0.0.1", 00:18:59.800 "trsvcid": "43548" 00:18:59.800 }, 00:18:59.800 "auth": { 00:18:59.800 "state": "completed", 00:18:59.800 "digest": "sha384", 00:18:59.800 "dhgroup": "ffdhe2048" 00:18:59.800 } 00:18:59.800 } 00:18:59.800 ]' 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:59.800 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.058 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:00.058 06:29:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:01.018 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:01.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:01.018 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:01.018 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.019 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.019 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.019 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:01.019 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.019 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.327 06:29:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.608 00:19:01.608 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.608 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.608 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.865 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.865 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.865 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:01.865 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.865 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:01.865 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.865 { 00:19:01.865 "cntlid": 59, 00:19:01.865 "qid": 0, 00:19:01.865 "state": "enabled", 00:19:01.866 "thread": "nvmf_tgt_poll_group_000", 00:19:01.866 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:01.866 "listen_address": { 00:19:01.866 "trtype": "TCP", 00:19:01.866 "adrfam": "IPv4", 00:19:01.866 "traddr": "10.0.0.2", 00:19:01.866 "trsvcid": "4420" 00:19:01.866 }, 00:19:01.866 "peer_address": { 00:19:01.866 "trtype": "TCP", 00:19:01.866 "adrfam": "IPv4", 00:19:01.866 "traddr": "10.0.0.1", 00:19:01.866 "trsvcid": "43570" 00:19:01.866 }, 00:19:01.866 "auth": { 00:19:01.866 "state": "completed", 00:19:01.866 "digest": "sha384", 00:19:01.866 "dhgroup": "ffdhe2048" 00:19:01.866 } 00:19:01.866 } 00:19:01.866 ]' 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.866 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:02.432 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:02.432 06:29:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:02.998 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.256 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.256 06:29:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.514 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:03.772 00:19:03.772 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:03.772 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:03.772 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.030 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.030 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.030 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.030 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.030 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.030 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.030 { 00:19:04.030 "cntlid": 61, 00:19:04.030 "qid": 0, 00:19:04.030 "state": "enabled", 00:19:04.030 "thread": "nvmf_tgt_poll_group_000", 00:19:04.030 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:04.030 "listen_address": { 00:19:04.030 "trtype": "TCP", 00:19:04.030 "adrfam": "IPv4", 00:19:04.030 "traddr": "10.0.0.2", 00:19:04.030 "trsvcid": "4420" 00:19:04.030 }, 00:19:04.030 "peer_address": { 00:19:04.030 "trtype": "TCP", 00:19:04.030 "adrfam": "IPv4", 00:19:04.030 "traddr": "10.0.0.1", 00:19:04.030 "trsvcid": "43598" 00:19:04.030 }, 00:19:04.030 "auth": { 00:19:04.031 "state": "completed", 00:19:04.031 "digest": "sha384", 00:19:04.031 "dhgroup": "ffdhe2048" 00:19:04.031 } 00:19:04.031 } 00:19:04.031 ]' 00:19:04.031 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.031 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.031 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.031 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:04.031 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.289 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.289 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.289 06:29:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:04.547 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:04.547 06:29:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme 
disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.480 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.738 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:05.996 00:19:05.996 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:05.996 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # jq -r '.[].name' 00:19:05.996 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.254 { 00:19:06.254 "cntlid": 63, 00:19:06.254 "qid": 0, 00:19:06.254 "state": "enabled", 00:19:06.254 "thread": "nvmf_tgt_poll_group_000", 00:19:06.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:06.254 "listen_address": { 00:19:06.254 "trtype": "TCP", 00:19:06.254 "adrfam": "IPv4", 00:19:06.254 "traddr": "10.0.0.2", 00:19:06.254 "trsvcid": "4420" 00:19:06.254 }, 00:19:06.254 "peer_address": { 00:19:06.254 "trtype": "TCP", 00:19:06.254 "adrfam": "IPv4", 00:19:06.254 "traddr": "10.0.0.1", 00:19:06.254 "trsvcid": "41784" 00:19:06.254 }, 00:19:06.254 "auth": { 00:19:06.254 "state": "completed", 00:19:06.254 "digest": "sha384", 00:19:06.254 "dhgroup": "ffdhe2048" 00:19:06.254 } 00:19:06.254 } 00:19:06.254 ]' 00:19:06.254 06:29:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.254 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.254 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.254 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:06.254 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.512 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.512 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.512 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.771 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:06.771 06:29:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n 
nqn.2024-03.io.spdk:cnode0 00:19:07.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.704 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:07.962 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:08.220 
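Each attach in the trace is followed by the same target-side verification (auth.sh@73-77): the controller name must be nvme0, and the single qpair must report the digest and DH group under test with a completed authentication transaction. A condensed form of that check, with rpc_cmd standing for the target-side rpc.py wrapper used throughout the log and the expected values taken from the ffdhe3072 iteration that follows:

qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
# The same pattern repeats for every digest/dhgroup/key combination in the sweep.
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]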
00:19:08.220 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:08.220 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:08.220 06:29:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:08.499 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:08.499 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:08.499 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:08.499 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.499 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:08.499 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:08.499 { 00:19:08.499 "cntlid": 65, 00:19:08.499 "qid": 0, 00:19:08.499 "state": "enabled", 00:19:08.499 "thread": "nvmf_tgt_poll_group_000", 00:19:08.499 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:08.499 "listen_address": { 00:19:08.499 "trtype": "TCP", 00:19:08.499 "adrfam": "IPv4", 00:19:08.499 "traddr": "10.0.0.2", 00:19:08.500 "trsvcid": "4420" 00:19:08.500 }, 00:19:08.500 "peer_address": { 00:19:08.500 "trtype": "TCP", 00:19:08.500 "adrfam": "IPv4", 00:19:08.500 "traddr": "10.0.0.1", 00:19:08.500 "trsvcid": "41806" 00:19:08.500 }, 00:19:08.500 "auth": { 00:19:08.500 "state": "completed", 00:19:08.500 "digest": "sha384", 00:19:08.500 "dhgroup": "ffdhe3072" 00:19:08.500 } 00:19:08.500 } 00:19:08.500 ]' 00:19:08.500 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:08.500 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:08.500 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:08.500 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:08.500 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:08.758 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:08.758 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:08.758 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.017 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:09.017 06:29:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:09.949 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:09.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:09.950 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.207 06:29:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:10.465 00:19:10.465 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.465 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.465 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.723 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.723 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:10.723 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:10.723 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.723 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:10.723 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:10.723 { 00:19:10.723 "cntlid": 67, 00:19:10.723 "qid": 0, 00:19:10.723 "state": "enabled", 00:19:10.723 "thread": "nvmf_tgt_poll_group_000", 00:19:10.723 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:10.723 "listen_address": { 00:19:10.724 "trtype": "TCP", 00:19:10.724 "adrfam": "IPv4", 00:19:10.724 "traddr": "10.0.0.2", 00:19:10.724 "trsvcid": "4420" 00:19:10.724 }, 00:19:10.724 "peer_address": { 00:19:10.724 "trtype": "TCP", 00:19:10.724 "adrfam": "IPv4", 00:19:10.724 "traddr": "10.0.0.1", 00:19:10.724 "trsvcid": "41846" 00:19:10.724 }, 00:19:10.724 "auth": { 00:19:10.724 "state": "completed", 00:19:10.724 "digest": "sha384", 00:19:10.724 "dhgroup": "ffdhe3072" 00:19:10.724 } 00:19:10.724 } 00:19:10.724 ]' 00:19:10.724 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:10.724 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:10.724 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:10.981 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:10.981 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:10.981 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:10.981 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:10.981 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret 
DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:11.239 06:29:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.175 06:29:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.434 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:12.692 00:19:12.692 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.692 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.692 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.951 { 00:19:12.951 "cntlid": 69, 00:19:12.951 "qid": 0, 00:19:12.951 "state": "enabled", 00:19:12.951 "thread": "nvmf_tgt_poll_group_000", 00:19:12.951 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:12.951 "listen_address": { 00:19:12.951 "trtype": "TCP", 00:19:12.951 "adrfam": "IPv4", 00:19:12.951 "traddr": "10.0.0.2", 00:19:12.951 "trsvcid": "4420" 00:19:12.951 }, 00:19:12.951 "peer_address": { 00:19:12.951 "trtype": "TCP", 00:19:12.951 "adrfam": "IPv4", 00:19:12.951 "traddr": "10.0.0.1", 00:19:12.951 "trsvcid": "41886" 00:19:12.951 }, 00:19:12.951 "auth": { 00:19:12.951 "state": "completed", 00:19:12.951 "digest": "sha384", 00:19:12.951 "dhgroup": "ffdhe3072" 00:19:12.951 } 00:19:12.951 } 00:19:12.951 ]' 00:19:12.951 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:13.209 06:29:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:19:13.467 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:13.467 06:29:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:14.401 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.401 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 
00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:14.661 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:15.228 00:19:15.228 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.228 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.228 06:29:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:15.487 { 00:19:15.487 "cntlid": 71, 00:19:15.487 "qid": 0, 00:19:15.487 "state": "enabled", 00:19:15.487 "thread": "nvmf_tgt_poll_group_000", 00:19:15.487 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:15.487 "listen_address": { 00:19:15.487 "trtype": "TCP", 00:19:15.487 "adrfam": "IPv4", 00:19:15.487 "traddr": "10.0.0.2", 00:19:15.487 "trsvcid": "4420" 00:19:15.487 }, 00:19:15.487 "peer_address": { 00:19:15.487 "trtype": "TCP", 00:19:15.487 "adrfam": "IPv4", 00:19:15.487 "traddr": "10.0.0.1", 00:19:15.487 "trsvcid": "46458" 00:19:15.487 }, 00:19:15.487 "auth": { 00:19:15.487 "state": "completed", 00:19:15.487 "digest": "sha384", 00:19:15.487 "dhgroup": "ffdhe3072" 00:19:15.487 } 00:19:15.487 } 00:19:15.487 ]' 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:15.487 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:15.744 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:15.745 06:29:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:16.679 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
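[editor note] Between bdev-level passes, the same keys are also exercised through the kernel initiator: nvme_connect attaches with the DHHC-1 secrets printed in the log, the connection is torn down with nvme disconnect, and the host entry is removed from the subsystem before the next key/dhgroup pair. Reduced to its essentials (flags copied from the nvme_connect helper in this trace; DHCHAP_KEY and DHCHAP_CTRL_KEY are placeholders standing in for the DHHC-1:xx:... strings shown above, and the remove_host call is shown against the default target socket as an assumption, since the trace drives it through rpc_cmd):

  # Kernel nvme-cli leg of the same test, as driven by nvme_connect / nvme disconnect above.
  # DHCHAP_KEY / DHCHAP_CTRL_KEY are placeholders for the DHHC-1 secrets printed in this log.
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
      --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 \
      --dhchap-secret "$DHCHAP_KEY" --dhchap-ctrl-secret "$DHCHAP_CTRL_KEY"
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0

  # Drop the host from the subsystem before the next digest/dhgroup/key combination.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_host \
      nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a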
00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:16.937 06:29:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:17.507 00:19:17.507 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.508 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.508 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.769 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.769 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.769 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:17.769 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.769 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:17.769 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.769 { 00:19:17.769 "cntlid": 73, 00:19:17.769 "qid": 0, 00:19:17.769 "state": "enabled", 00:19:17.769 "thread": "nvmf_tgt_poll_group_000", 00:19:17.770 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:17.770 "listen_address": { 00:19:17.770 "trtype": "TCP", 00:19:17.770 "adrfam": "IPv4", 00:19:17.770 "traddr": "10.0.0.2", 00:19:17.770 "trsvcid": "4420" 00:19:17.770 }, 00:19:17.770 "peer_address": { 00:19:17.770 "trtype": "TCP", 00:19:17.770 "adrfam": "IPv4", 00:19:17.770 "traddr": "10.0.0.1", 00:19:17.770 "trsvcid": "46474" 00:19:17.770 }, 00:19:17.770 "auth": { 00:19:17.770 "state": "completed", 00:19:17.770 "digest": "sha384", 00:19:17.770 "dhgroup": "ffdhe4096" 00:19:17.770 } 00:19:17.770 } 00:19:17.770 ]' 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.770 
06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.770 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.027 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:18.028 06:29:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:18.960 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:19.218 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:19.218 06:29:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.476 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:19.733 00:19:19.733 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.733 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.733 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.991 { 00:19:19.991 "cntlid": 75, 00:19:19.991 "qid": 0, 00:19:19.991 "state": "enabled", 00:19:19.991 "thread": "nvmf_tgt_poll_group_000", 00:19:19.991 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:19.991 "listen_address": { 00:19:19.991 "trtype": "TCP", 00:19:19.991 "adrfam": "IPv4", 00:19:19.991 "traddr": "10.0.0.2", 00:19:19.991 "trsvcid": "4420" 00:19:19.991 }, 00:19:19.991 "peer_address": { 00:19:19.991 "trtype": "TCP", 00:19:19.991 "adrfam": "IPv4", 00:19:19.991 "traddr": "10.0.0.1", 00:19:19.991 "trsvcid": "46508" 00:19:19.991 }, 00:19:19.991 "auth": { 00:19:19.991 "state": "completed", 00:19:19.991 "digest": "sha384", 00:19:19.991 "dhgroup": "ffdhe4096" 00:19:19.991 } 00:19:19.991 } 00:19:19.991 ]' 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.991 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:20.249 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == 
\f\f\d\h\e\4\0\9\6 ]] 00:19:20.249 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:20.249 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:20.249 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:20.249 06:29:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:20.507 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:20.507 06:29:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:21.441 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.441 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.699 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:21.957 00:19:21.957 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.957 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.957 06:29:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:22.215 { 00:19:22.215 "cntlid": 77, 00:19:22.215 "qid": 0, 00:19:22.215 "state": "enabled", 00:19:22.215 "thread": "nvmf_tgt_poll_group_000", 00:19:22.215 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:22.215 "listen_address": { 00:19:22.215 "trtype": "TCP", 00:19:22.215 "adrfam": "IPv4", 00:19:22.215 "traddr": "10.0.0.2", 00:19:22.215 "trsvcid": "4420" 00:19:22.215 }, 00:19:22.215 "peer_address": { 00:19:22.215 "trtype": "TCP", 00:19:22.215 "adrfam": "IPv4", 00:19:22.215 "traddr": "10.0.0.1", 00:19:22.215 "trsvcid": "46524" 00:19:22.215 }, 00:19:22.215 "auth": { 00:19:22.215 "state": "completed", 00:19:22.215 "digest": "sha384", 00:19:22.215 "dhgroup": "ffdhe4096" 00:19:22.215 } 00:19:22.215 } 00:19:22.215 ]' 00:19:22.215 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:22.473 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:22.473 06:29:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:22.473 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:22.473 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:22.473 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.473 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.473 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:22.731 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:22.731 06:29:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.664 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:23.922 06:29:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.488 00:19:24.488 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:24.488 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.488 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.746 { 00:19:24.746 "cntlid": 79, 00:19:24.746 "qid": 0, 00:19:24.746 "state": "enabled", 00:19:24.746 "thread": "nvmf_tgt_poll_group_000", 00:19:24.746 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:24.746 "listen_address": { 00:19:24.746 "trtype": "TCP", 00:19:24.746 "adrfam": "IPv4", 00:19:24.746 "traddr": "10.0.0.2", 00:19:24.746 "trsvcid": "4420" 00:19:24.746 }, 00:19:24.746 "peer_address": { 00:19:24.746 "trtype": "TCP", 00:19:24.746 "adrfam": "IPv4", 00:19:24.746 "traddr": "10.0.0.1", 00:19:24.746 "trsvcid": "46560" 00:19:24.746 }, 00:19:24.746 "auth": { 00:19:24.746 "state": "completed", 00:19:24.746 "digest": "sha384", 00:19:24.746 "dhgroup": "ffdhe4096" 00:19:24.746 } 00:19:24.746 } 00:19:24.746 ]' 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:24.746 06:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:24.746 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.003 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:25.003 06:29:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.935 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:25.935 06:29:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:26.193 06:29:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.193 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.759 00:19:26.759 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.759 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.759 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.017 { 00:19:27.017 "cntlid": 81, 00:19:27.017 "qid": 0, 00:19:27.017 "state": "enabled", 00:19:27.017 "thread": "nvmf_tgt_poll_group_000", 00:19:27.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:27.017 "listen_address": { 00:19:27.017 "trtype": "TCP", 00:19:27.017 "adrfam": "IPv4", 00:19:27.017 "traddr": "10.0.0.2", 00:19:27.017 "trsvcid": "4420" 00:19:27.017 }, 00:19:27.017 "peer_address": { 00:19:27.017 "trtype": "TCP", 00:19:27.017 "adrfam": "IPv4", 00:19:27.017 "traddr": "10.0.0.1", 00:19:27.017 "trsvcid": "59348" 00:19:27.017 }, 00:19:27.017 "auth": { 00:19:27.017 "state": "completed", 00:19:27.017 "digest": 
"sha384", 00:19:27.017 "dhgroup": "ffdhe6144" 00:19:27.017 } 00:19:27.017 } 00:19:27.017 ]' 00:19:27.017 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.275 06:29:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.532 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:27.532 06:29:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.465 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.465 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:28.723 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:29.289 00:19:29.289 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.289 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.289 06:30:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.547 { 00:19:29.547 "cntlid": 83, 00:19:29.547 "qid": 0, 00:19:29.547 "state": "enabled", 00:19:29.547 "thread": "nvmf_tgt_poll_group_000", 00:19:29.547 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:29.547 "listen_address": { 00:19:29.547 "trtype": "TCP", 00:19:29.547 "adrfam": "IPv4", 00:19:29.547 "traddr": "10.0.0.2", 00:19:29.547 
"trsvcid": "4420" 00:19:29.547 }, 00:19:29.547 "peer_address": { 00:19:29.547 "trtype": "TCP", 00:19:29.547 "adrfam": "IPv4", 00:19:29.547 "traddr": "10.0.0.1", 00:19:29.547 "trsvcid": "59382" 00:19:29.547 }, 00:19:29.547 "auth": { 00:19:29.547 "state": "completed", 00:19:29.547 "digest": "sha384", 00:19:29.547 "dhgroup": "ffdhe6144" 00:19:29.547 } 00:19:29.547 } 00:19:29.547 ]' 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.547 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:30.112 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:30.112 06:30:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.710 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.710 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:30.993 
06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:30.993 06:30:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:31.560 00:19:31.560 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:31.560 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:31.560 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.817 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.817 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.817 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:31.817 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.075 { 00:19:32.075 "cntlid": 85, 00:19:32.075 "qid": 0, 00:19:32.075 "state": "enabled", 00:19:32.075 "thread": "nvmf_tgt_poll_group_000", 00:19:32.075 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:32.075 "listen_address": { 00:19:32.075 "trtype": "TCP", 00:19:32.075 "adrfam": "IPv4", 00:19:32.075 "traddr": "10.0.0.2", 00:19:32.075 "trsvcid": "4420" 00:19:32.075 }, 00:19:32.075 "peer_address": { 00:19:32.075 "trtype": "TCP", 00:19:32.075 "adrfam": "IPv4", 00:19:32.075 "traddr": "10.0.0.1", 00:19:32.075 "trsvcid": "59408" 00:19:32.075 }, 00:19:32.075 "auth": { 00:19:32.075 "state": "completed", 00:19:32.075 "digest": "sha384", 00:19:32.075 "dhgroup": "ffdhe6144" 00:19:32.075 } 00:19:32.075 } 00:19:32.075 ]' 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.075 06:30:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.333 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:32.333 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.267 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.267 06:30:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.267 06:30:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:33.527 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:34.092 00:19:34.092 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:34.092 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:34.092 06:30:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.351 { 00:19:34.351 "cntlid": 87, 
00:19:34.351 "qid": 0, 00:19:34.351 "state": "enabled", 00:19:34.351 "thread": "nvmf_tgt_poll_group_000", 00:19:34.351 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:34.351 "listen_address": { 00:19:34.351 "trtype": "TCP", 00:19:34.351 "adrfam": "IPv4", 00:19:34.351 "traddr": "10.0.0.2", 00:19:34.351 "trsvcid": "4420" 00:19:34.351 }, 00:19:34.351 "peer_address": { 00:19:34.351 "trtype": "TCP", 00:19:34.351 "adrfam": "IPv4", 00:19:34.351 "traddr": "10.0.0.1", 00:19:34.351 "trsvcid": "59432" 00:19:34.351 }, 00:19:34.351 "auth": { 00:19:34.351 "state": "completed", 00:19:34.351 "digest": "sha384", 00:19:34.351 "dhgroup": "ffdhe6144" 00:19:34.351 } 00:19:34.351 } 00:19:34.351 ]' 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:34.351 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.609 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.609 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.609 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.867 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:34.867 06:30:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.801 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:35.801 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.058 06:30:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:36.991 00:19:36.991 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:36.992 { 00:19:36.992 "cntlid": 89, 00:19:36.992 "qid": 0, 00:19:36.992 "state": "enabled", 00:19:36.992 "thread": "nvmf_tgt_poll_group_000", 00:19:36.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:36.992 "listen_address": { 00:19:36.992 "trtype": "TCP", 00:19:36.992 "adrfam": "IPv4", 00:19:36.992 "traddr": "10.0.0.2", 00:19:36.992 "trsvcid": "4420" 00:19:36.992 }, 00:19:36.992 "peer_address": { 00:19:36.992 "trtype": "TCP", 00:19:36.992 "adrfam": "IPv4", 00:19:36.992 "traddr": "10.0.0.1", 00:19:36.992 "trsvcid": "53794" 00:19:36.992 }, 00:19:36.992 "auth": { 00:19:36.992 "state": "completed", 00:19:36.992 "digest": "sha384", 00:19:36.992 "dhgroup": "ffdhe8192" 00:19:36.992 } 00:19:36.992 } 00:19:36.992 ]' 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:36.992 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.249 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:37.249 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.250 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.250 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.250 06:30:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.508 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:37.508 06:30:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.442 06:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:38.442 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:38.700 06:30:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.633 00:19:39.633 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:39.634 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:39.634 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:39.892 { 00:19:39.892 "cntlid": 91, 00:19:39.892 "qid": 0, 00:19:39.892 "state": "enabled", 00:19:39.892 "thread": "nvmf_tgt_poll_group_000", 00:19:39.892 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:39.892 "listen_address": { 00:19:39.892 "trtype": "TCP", 00:19:39.892 "adrfam": "IPv4", 00:19:39.892 "traddr": "10.0.0.2", 00:19:39.892 "trsvcid": "4420" 00:19:39.892 }, 00:19:39.892 "peer_address": { 00:19:39.892 "trtype": "TCP", 00:19:39.892 "adrfam": "IPv4", 00:19:39.892 "traddr": "10.0.0.1", 00:19:39.892 "trsvcid": "53830" 00:19:39.892 }, 00:19:39.892 "auth": { 00:19:39.892 "state": "completed", 00:19:39.892 "digest": "sha384", 00:19:39.892 "dhgroup": "ffdhe8192" 00:19:39.892 } 00:19:39.892 } 00:19:39.892 ]' 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.892 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.151 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:40.151 06:30:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:41.084 06:30:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.084 06:30:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.342 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:42.278 00:19:42.278 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:42.278 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.278 06:30:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:42.535 06:30:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.536 { 00:19:42.536 "cntlid": 93, 00:19:42.536 "qid": 0, 00:19:42.536 "state": "enabled", 00:19:42.536 "thread": "nvmf_tgt_poll_group_000", 00:19:42.536 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:42.536 "listen_address": { 00:19:42.536 "trtype": "TCP", 00:19:42.536 "adrfam": "IPv4", 00:19:42.536 "traddr": "10.0.0.2", 00:19:42.536 "trsvcid": "4420" 00:19:42.536 }, 00:19:42.536 "peer_address": { 00:19:42.536 "trtype": "TCP", 00:19:42.536 "adrfam": "IPv4", 00:19:42.536 "traddr": "10.0.0.1", 00:19:42.536 "trsvcid": "53866" 00:19:42.536 }, 00:19:42.536 "auth": { 00:19:42.536 "state": "completed", 00:19:42.536 "digest": "sha384", 00:19:42.536 "dhgroup": "ffdhe8192" 00:19:42.536 } 00:19:42.536 } 00:19:42.536 ]' 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:42.536 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.794 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.794 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.794 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.052 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:43.052 06:30:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.048 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.048 06:30:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.048 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.306 06:30:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:44.869 00:19:45.126 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.126 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.126 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.383 { 00:19:45.383 "cntlid": 95, 00:19:45.383 "qid": 0, 00:19:45.383 "state": "enabled", 00:19:45.383 "thread": "nvmf_tgt_poll_group_000", 00:19:45.383 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:45.383 "listen_address": { 00:19:45.383 "trtype": "TCP", 00:19:45.383 "adrfam": "IPv4", 00:19:45.383 "traddr": "10.0.0.2", 00:19:45.383 "trsvcid": "4420" 00:19:45.383 }, 00:19:45.383 "peer_address": { 00:19:45.383 "trtype": "TCP", 00:19:45.383 "adrfam": "IPv4", 00:19:45.383 "traddr": "10.0.0.1", 00:19:45.383 "trsvcid": "56640" 00:19:45.383 }, 00:19:45.383 "auth": { 00:19:45.383 "state": "completed", 00:19:45.383 "digest": "sha384", 00:19:45.383 "dhgroup": "ffdhe8192" 00:19:45.383 } 00:19:45.383 } 00:19:45.383 ]' 00:19:45.383 06:30:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.383 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.642 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:45.642 06:30:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.575 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.575 06:30:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.575 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:46.833 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:47.091 00:19:47.348 
06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:47.348 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.348 06:30:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.606 { 00:19:47.606 "cntlid": 97, 00:19:47.606 "qid": 0, 00:19:47.606 "state": "enabled", 00:19:47.606 "thread": "nvmf_tgt_poll_group_000", 00:19:47.606 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:47.606 "listen_address": { 00:19:47.606 "trtype": "TCP", 00:19:47.606 "adrfam": "IPv4", 00:19:47.606 "traddr": "10.0.0.2", 00:19:47.606 "trsvcid": "4420" 00:19:47.606 }, 00:19:47.606 "peer_address": { 00:19:47.606 "trtype": "TCP", 00:19:47.606 "adrfam": "IPv4", 00:19:47.606 "traddr": "10.0.0.1", 00:19:47.606 "trsvcid": "56672" 00:19:47.606 }, 00:19:47.606 "auth": { 00:19:47.606 "state": "completed", 00:19:47.606 "digest": "sha512", 00:19:47.606 "dhgroup": "null" 00:19:47.606 } 00:19:47.606 } 00:19:47.606 ]' 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.606 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.864 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:47.864 06:30:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 
29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:48.798 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.799 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:48.799 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.057 06:30:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:49.315 00:19:49.573 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.573 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.573 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:49.831 { 00:19:49.831 "cntlid": 99, 00:19:49.831 "qid": 0, 00:19:49.831 "state": "enabled", 00:19:49.831 "thread": "nvmf_tgt_poll_group_000", 00:19:49.831 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:49.831 "listen_address": { 00:19:49.831 "trtype": "TCP", 00:19:49.831 "adrfam": "IPv4", 00:19:49.831 "traddr": "10.0.0.2", 00:19:49.831 "trsvcid": "4420" 00:19:49.831 }, 00:19:49.831 "peer_address": { 00:19:49.831 "trtype": "TCP", 00:19:49.831 "adrfam": "IPv4", 00:19:49.831 "traddr": "10.0.0.1", 00:19:49.831 "trsvcid": "56714" 00:19:49.831 }, 00:19:49.831 "auth": { 00:19:49.831 "state": "completed", 00:19:49.831 "digest": "sha512", 00:19:49.831 "dhgroup": "null" 00:19:49.831 } 00:19:49.831 } 00:19:49.831 ]' 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.831 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.089 06:30:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:50.089 06:30:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.022 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:51.280 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 00:19:51.280 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.280 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.280 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:19:51.281 06:30:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:51.539 00:19:51.539 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.539 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.539 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.797 { 00:19:51.797 "cntlid": 101, 00:19:51.797 "qid": 0, 00:19:51.797 "state": "enabled", 00:19:51.797 "thread": "nvmf_tgt_poll_group_000", 00:19:51.797 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:51.797 "listen_address": { 00:19:51.797 "trtype": "TCP", 00:19:51.797 "adrfam": "IPv4", 00:19:51.797 "traddr": "10.0.0.2", 00:19:51.797 "trsvcid": "4420" 00:19:51.797 }, 00:19:51.797 "peer_address": { 00:19:51.797 "trtype": "TCP", 00:19:51.797 "adrfam": "IPv4", 00:19:51.797 "traddr": "10.0.0.1", 00:19:51.797 "trsvcid": "56744" 00:19:51.797 }, 00:19:51.797 "auth": { 00:19:51.797 "state": "completed", 00:19:51.797 "digest": "sha512", 00:19:51.797 "dhgroup": "null" 00:19:51.797 } 00:19:51.797 } 00:19:51.797 ]' 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.797 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:52.055 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:52.055 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:52.055 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:52.055 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:52.055 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.313 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:52.313 06:30:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.245 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.245 06:30:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:53.502 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:53.503 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.503 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:53.760 00:19:53.760 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.760 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.760 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:54.017 { 00:19:54.017 "cntlid": 103, 00:19:54.017 "qid": 0, 00:19:54.017 "state": "enabled", 00:19:54.017 "thread": "nvmf_tgt_poll_group_000", 00:19:54.017 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:54.017 "listen_address": { 00:19:54.017 "trtype": "TCP", 00:19:54.017 "adrfam": "IPv4", 00:19:54.017 "traddr": "10.0.0.2", 00:19:54.017 "trsvcid": "4420" 00:19:54.017 }, 00:19:54.017 "peer_address": { 00:19:54.017 "trtype": "TCP", 00:19:54.017 "adrfam": "IPv4", 00:19:54.017 "traddr": "10.0.0.1", 00:19:54.017 "trsvcid": "56760" 00:19:54.017 }, 00:19:54.017 "auth": { 00:19:54.017 "state": "completed", 00:19:54.017 "digest": "sha512", 00:19:54.017 "dhgroup": "null" 00:19:54.017 } 00:19:54.017 } 00:19:54.017 ]' 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:54.017 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:54.275 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:54.275 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:54.275 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.275 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.275 06:30:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.533 06:30:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:54.533 06:30:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.464 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:55.464 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 
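After each attach the test confirms that the qpair really negotiated the digest, DH group and auth state it asked for; a minimal sketch of that verification, assuming the target-side rpc_cmd uses the default RPC socket (only the host-side calls in the trace pass -s /var/tmp/host.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0

# the host must see the controller it just attached
[[ "$("$rpc" -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]

# the target must report a qpair whose auth block matches this iteration (sha512/ffdhe2048)
qpairs=$("$rpc" nvmf_subsystem_get_qpairs "$subnqn")   # default target socket: an assumption
[[ "$(jq -r '.[0].auth.digest'  <<< "$qpairs")" == sha512 ]]
[[ "$(jq -r '.[0].auth.dhgroup' <<< "$qpairs")" == ffdhe2048 ]]
[[ "$(jq -r '.[0].auth.state'   <<< "$qpairs")" == completed ]]

# tear the bdev down before switching to the in-band nvme-cli path
"$rpc" -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0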
00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:55.723 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:56.289 00:19:56.289 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.289 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.289 06:30:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.289 { 00:19:56.289 "cntlid": 105, 00:19:56.289 "qid": 0, 00:19:56.289 "state": "enabled", 00:19:56.289 "thread": "nvmf_tgt_poll_group_000", 00:19:56.289 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:56.289 "listen_address": { 00:19:56.289 "trtype": "TCP", 00:19:56.289 "adrfam": "IPv4", 00:19:56.289 "traddr": "10.0.0.2", 00:19:56.289 "trsvcid": "4420" 00:19:56.289 }, 00:19:56.289 "peer_address": { 00:19:56.289 "trtype": "TCP", 00:19:56.289 "adrfam": "IPv4", 00:19:56.289 "traddr": "10.0.0.1", 00:19:56.289 "trsvcid": "47602" 00:19:56.289 }, 00:19:56.289 "auth": { 00:19:56.289 "state": "completed", 00:19:56.289 "digest": "sha512", 00:19:56.289 "dhgroup": "ffdhe2048" 00:19:56.289 } 00:19:56.289 } 00:19:56.289 ]' 00:19:56.289 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.546 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.546 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.546 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:56.546 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.546 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.546 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.546 06:30:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.804 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:56.804 06:30:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:19:57.737 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.737 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.737 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:57.737 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.737 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.737 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.738 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.738 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.738 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:57.995 06:30:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:58.254 00:19:58.254 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.254 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.512 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.769 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.769 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.769 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:58.769 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.769 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:58.769 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.769 { 00:19:58.769 "cntlid": 107, 00:19:58.769 "qid": 0, 00:19:58.769 "state": "enabled", 00:19:58.769 "thread": "nvmf_tgt_poll_group_000", 00:19:58.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:19:58.769 "listen_address": { 00:19:58.769 "trtype": "TCP", 00:19:58.769 "adrfam": "IPv4", 00:19:58.769 "traddr": "10.0.0.2", 00:19:58.769 "trsvcid": "4420" 00:19:58.770 }, 00:19:58.770 "peer_address": { 00:19:58.770 "trtype": "TCP", 00:19:58.770 "adrfam": "IPv4", 00:19:58.770 "traddr": "10.0.0.1", 00:19:58.770 "trsvcid": "47628" 00:19:58.770 }, 00:19:58.770 "auth": { 00:19:58.770 "state": "completed", 00:19:58.770 "digest": "sha512", 00:19:58.770 "dhgroup": "ffdhe2048" 00:19:58.770 } 00:19:58.770 } 00:19:58.770 ]' 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r 
'.[0].auth.state' 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.770 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.027 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:59.027 06:30:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.982 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:59.982 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 
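Each digest/dhgroup/key combination is also exercised through the kernel initiator before being torn down, as traced just above; a condensed sketch of that connect/teardown, with the DHHC-1 blobs replaced by placeholders rather than the secrets generated earlier in the run:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
hostid=29f67375-a902-e411-ace9-001e67bc3c9a
subnqn=nqn.2024-03.io.spdk:cnode0
secret='DHHC-1:01:<host key>'             # placeholder for the generated host secret
ctrl_secret='DHHC-1:02:<controller key>'  # placeholder for the generated controller secret

# in-band connect via nvme-cli, mirroring the nvme_connect helper in the trace
nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" -l 0 \
    --dhchap-secret "$secret" --dhchap-ctrl-secret "$ctrl_secret"

# disconnect and drop the host entry so the next digest/dhgroup/key pair starts clean
nvme disconnect -n "$subnqn"
"$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"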
00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.240 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.241 06:30:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:00.537 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.823 { 00:20:00.823 "cntlid": 109, 00:20:00.823 "qid": 0, 00:20:00.823 "state": "enabled", 00:20:00.823 "thread": "nvmf_tgt_poll_group_000", 00:20:00.823 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:00.823 "listen_address": { 00:20:00.823 "trtype": "TCP", 00:20:00.823 "adrfam": "IPv4", 00:20:00.823 "traddr": "10.0.0.2", 00:20:00.823 "trsvcid": "4420" 00:20:00.823 }, 00:20:00.823 "peer_address": { 00:20:00.823 "trtype": "TCP", 00:20:00.823 "adrfam": "IPv4", 00:20:00.823 "traddr": "10.0.0.1", 00:20:00.823 "trsvcid": "47662" 00:20:00.823 }, 00:20:00.823 "auth": { 00:20:00.823 "state": "completed", 00:20:00.823 "digest": "sha512", 00:20:00.823 "dhgroup": "ffdhe2048" 00:20:00.823 } 00:20:00.823 } 00:20:00.823 ]' 00:20:00.823 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:01.082 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:01.082 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:01.082 06:30:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:01.082 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:01.082 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.082 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.082 06:30:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.341 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:01.341 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.316 06:30:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.575 06:30:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.575 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:02.832 00:20:03.090 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.090 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:03.090 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:03.348 { 00:20:03.348 "cntlid": 111, 00:20:03.348 "qid": 0, 00:20:03.348 "state": "enabled", 00:20:03.348 "thread": "nvmf_tgt_poll_group_000", 00:20:03.348 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:03.348 "listen_address": { 00:20:03.348 "trtype": "TCP", 00:20:03.348 "adrfam": "IPv4", 00:20:03.348 "traddr": "10.0.0.2", 00:20:03.348 "trsvcid": "4420" 00:20:03.348 }, 00:20:03.348 "peer_address": { 00:20:03.348 "trtype": "TCP", 00:20:03.348 "adrfam": "IPv4", 00:20:03.348 "traddr": "10.0.0.1", 00:20:03.348 "trsvcid": "47688" 00:20:03.348 }, 00:20:03.348 "auth": { 00:20:03.348 "state": "completed", 00:20:03.348 "digest": "sha512", 00:20:03.348 "dhgroup": "ffdhe2048" 00:20:03.348 } 00:20:03.348 } 00:20:03.348 ]' 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:03.348 06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:03.348 
06:30:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:03.348 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:03.348 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:03.348 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.348 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:03.348 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:03.606 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:03.606 06:30:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.540 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:04.797 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:20:04.797 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:04.797 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:04.797 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:04.798 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.363 00:20:05.363 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.363 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.363 06:30:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.621 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.621 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.621 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:05.621 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.621 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:05.621 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.621 { 00:20:05.621 "cntlid": 113, 00:20:05.621 "qid": 0, 00:20:05.621 "state": "enabled", 00:20:05.621 "thread": "nvmf_tgt_poll_group_000", 00:20:05.621 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:05.621 "listen_address": { 00:20:05.621 "trtype": "TCP", 00:20:05.622 "adrfam": "IPv4", 00:20:05.622 "traddr": "10.0.0.2", 00:20:05.622 "trsvcid": "4420" 00:20:05.622 }, 00:20:05.622 "peer_address": { 00:20:05.622 "trtype": "TCP", 00:20:05.622 "adrfam": "IPv4", 00:20:05.622 "traddr": "10.0.0.1", 00:20:05.622 "trsvcid": "43848" 00:20:05.622 }, 00:20:05.622 "auth": { 00:20:05.622 "state": "completed", 00:20:05.622 "digest": "sha512", 00:20:05.622 "dhgroup": "ffdhe3072" 00:20:05.622 } 00:20:05.622 } 00:20:05.622 ]' 00:20:05.622 06:30:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.622 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:05.879 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:05.879 06:30:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.811 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:06.811 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha512 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.069 06:30:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.633 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.633 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:07.891 { 00:20:07.891 "cntlid": 115, 00:20:07.891 "qid": 0, 00:20:07.891 "state": "enabled", 00:20:07.891 "thread": "nvmf_tgt_poll_group_000", 00:20:07.891 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:07.891 "listen_address": { 00:20:07.891 "trtype": "TCP", 00:20:07.891 "adrfam": "IPv4", 00:20:07.891 "traddr": "10.0.0.2", 00:20:07.891 "trsvcid": "4420" 00:20:07.891 }, 00:20:07.891 "peer_address": { 00:20:07.891 "trtype": "TCP", 00:20:07.891 "adrfam": "IPv4", 
00:20:07.891 "traddr": "10.0.0.1", 00:20:07.891 "trsvcid": "43876" 00:20:07.891 }, 00:20:07.891 "auth": { 00:20:07.891 "state": "completed", 00:20:07.891 "digest": "sha512", 00:20:07.891 "dhgroup": "ffdhe3072" 00:20:07.891 } 00:20:07.891 } 00:20:07.891 ]' 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:07.891 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:08.150 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:08.150 06:30:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:09.085 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.085 06:30:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 
00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.343 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:09.601 00:20:09.601 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.859 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.859 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:10.116 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:10.116 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:10.116 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:10.116 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.116 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:10.116 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:10.116 { 00:20:10.116 "cntlid": 117, 00:20:10.116 "qid": 0, 00:20:10.116 "state": "enabled", 00:20:10.116 "thread": "nvmf_tgt_poll_group_000", 00:20:10.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:10.116 "listen_address": { 00:20:10.116 "trtype": "TCP", 
00:20:10.116 "adrfam": "IPv4", 00:20:10.116 "traddr": "10.0.0.2", 00:20:10.117 "trsvcid": "4420" 00:20:10.117 }, 00:20:10.117 "peer_address": { 00:20:10.117 "trtype": "TCP", 00:20:10.117 "adrfam": "IPv4", 00:20:10.117 "traddr": "10.0.0.1", 00:20:10.117 "trsvcid": "43896" 00:20:10.117 }, 00:20:10.117 "auth": { 00:20:10.117 "state": "completed", 00:20:10.117 "digest": "sha512", 00:20:10.117 "dhgroup": "ffdhe3072" 00:20:10.117 } 00:20:10.117 } 00:20:10.117 ]' 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.117 06:30:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.374 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:10.374 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:11.307 06:30:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:11.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.307 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:11.565 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.566 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.566 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.823 00:20:12.083 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:12.083 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:12.083 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.341 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.341 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:12.341 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:12.341 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.341 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:12.341 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:12.341 { 00:20:12.341 "cntlid": 119, 00:20:12.341 "qid": 0, 00:20:12.341 "state": "enabled", 00:20:12.341 "thread": "nvmf_tgt_poll_group_000", 00:20:12.341 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:12.341 "listen_address": { 00:20:12.341 "trtype": "TCP", 00:20:12.341 "adrfam": "IPv4", 00:20:12.341 "traddr": "10.0.0.2", 00:20:12.341 "trsvcid": "4420" 00:20:12.341 }, 00:20:12.341 "peer_address": { 00:20:12.341 "trtype": "TCP", 00:20:12.341 "adrfam": "IPv4", 00:20:12.341 "traddr": "10.0.0.1", 00:20:12.341 "trsvcid": "43912" 00:20:12.341 }, 00:20:12.341 "auth": { 00:20:12.342 "state": "completed", 00:20:12.342 "digest": "sha512", 00:20:12.342 "dhgroup": "ffdhe3072" 00:20:12.342 } 00:20:12.342 } 00:20:12.342 ]' 00:20:12.342 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:12.342 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:12.342 06:30:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:12.342 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:12.342 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:12.342 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:12.342 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.342 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.600 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:12.600 06:30:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.534 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:13.534 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:13.534 06:30:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:13.792 06:30:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:14.358 00:20:14.358 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:14.358 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.358 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.615 06:30:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:14.615 { 00:20:14.615 "cntlid": 121, 00:20:14.615 "qid": 0, 00:20:14.615 "state": "enabled", 00:20:14.615 "thread": "nvmf_tgt_poll_group_000", 00:20:14.615 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:14.615 "listen_address": { 00:20:14.615 "trtype": "TCP", 00:20:14.615 "adrfam": "IPv4", 00:20:14.615 "traddr": "10.0.0.2", 00:20:14.615 "trsvcid": "4420" 00:20:14.615 }, 00:20:14.615 "peer_address": { 00:20:14.615 "trtype": "TCP", 00:20:14.615 "adrfam": "IPv4", 00:20:14.615 "traddr": "10.0.0.1", 00:20:14.615 "trsvcid": "43932" 00:20:14.615 }, 00:20:14.615 "auth": { 00:20:14.615 "state": "completed", 00:20:14.615 "digest": "sha512", 00:20:14.615 "dhgroup": "ffdhe4096" 00:20:14.615 } 00:20:14.615 } 00:20:14.615 ]' 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:14.615 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:14.872 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.872 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.872 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.130 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:15.130 06:30:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.063 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
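The iteration that finishes above (digest sha512, dhgroup ffdhe4096, key index 0) follows the same per-key pattern as every other pass in this log. Below is a condensed, illustrative sketch of that flow, assuming the same target listener at 10.0.0.2:4420 and host RPC socket /var/tmp/host.sock seen in the log; rpc_cmd stands for the target-side RPC wrapper used by target/auth.sh, and the DHHC-1 secrets are placeholders rather than the real test keys.

# Per-iteration DH-HMAC-CHAP flow, reconstructed from the log above (sketch only).
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a
SUBNQN=nqn.2024-03.io.spdk:cnode0

# Host side: restrict negotiation to the digest/dhgroup under test.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
    --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096

# Target side: authorize the host with this iteration's key (and controller key).
rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0

# Host side: attach over TCP with bidirectional authentication, then verify the controller exists.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 \
    -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 \
    --dhchap-key key0 --dhchap-ctrlr-key ckey0
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'   # expect: nvme0

# Target side: confirm the qpair negotiated the expected digest/dhgroup and completed auth.
rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'

# Tear down the bdev path, repeat the handshake through the kernel initiator, then clean up.
scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" --hostid "$HOSTID" -l 0 \
    --dhchap-secret 'DHHC-1:00:<key0>' --dhchap-ctrl-secret 'DHHC-1:03:<ckey0>'
nvme disconnect -n "$SUBNQN"
rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The recurring "[[ 0 == 0 ]]" lines in the log appear to be the status checks in autotest_common.sh confirming that each rpc_cmd call returned 0 before the test advances to the next digest/dhgroup/key combination.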
00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.063 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.321 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.322 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.322 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.322 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.322 06:30:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:16.579 00:20:16.579 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.579 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.579 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.837 { 00:20:16.837 "cntlid": 123, 00:20:16.837 "qid": 0, 00:20:16.837 "state": "enabled", 00:20:16.837 "thread": "nvmf_tgt_poll_group_000", 00:20:16.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:16.837 "listen_address": { 00:20:16.837 "trtype": "TCP", 00:20:16.837 "adrfam": "IPv4", 00:20:16.837 "traddr": "10.0.0.2", 00:20:16.837 "trsvcid": "4420" 00:20:16.837 }, 00:20:16.837 "peer_address": { 00:20:16.837 "trtype": "TCP", 00:20:16.837 "adrfam": "IPv4", 00:20:16.837 "traddr": "10.0.0.1", 00:20:16.837 "trsvcid": "56960" 00:20:16.837 }, 00:20:16.837 "auth": { 00:20:16.837 "state": "completed", 00:20:16.837 "digest": "sha512", 00:20:16.837 "dhgroup": "ffdhe4096" 00:20:16.837 } 00:20:16.837 } 00:20:16.837 ]' 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:16.837 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:17.095 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.095 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.095 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.353 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:17.353 06:30:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.285 06:30:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.285 06:30:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:18.543 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:19.108 00:20:19.108 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.108 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.108 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.366 06:30:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:19.366 { 00:20:19.366 "cntlid": 125, 00:20:19.366 "qid": 0, 00:20:19.366 "state": "enabled", 00:20:19.366 "thread": "nvmf_tgt_poll_group_000", 00:20:19.366 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:19.366 "listen_address": { 00:20:19.366 "trtype": "TCP", 00:20:19.366 "adrfam": "IPv4", 00:20:19.366 "traddr": "10.0.0.2", 00:20:19.366 "trsvcid": "4420" 00:20:19.366 }, 00:20:19.366 "peer_address": { 00:20:19.366 "trtype": "TCP", 00:20:19.366 "adrfam": "IPv4", 00:20:19.366 "traddr": "10.0.0.1", 00:20:19.366 "trsvcid": "56990" 00:20:19.366 }, 00:20:19.366 "auth": { 00:20:19.366 "state": "completed", 00:20:19.366 "digest": "sha512", 00:20:19.366 "dhgroup": "ffdhe4096" 00:20:19.366 } 00:20:19.366 } 00:20:19.366 ]' 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:19.366 06:30:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:19.366 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:19.366 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:19.366 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.366 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.366 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:19.625 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:19.625 06:30:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.558 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:20:20.815 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:20.816 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:21.382 00:20:21.382 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.382 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.382 06:30:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.640 06:30:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.640 { 00:20:21.640 "cntlid": 127, 00:20:21.640 "qid": 0, 00:20:21.640 "state": "enabled", 00:20:21.640 "thread": "nvmf_tgt_poll_group_000", 00:20:21.640 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:21.640 "listen_address": { 00:20:21.640 "trtype": "TCP", 00:20:21.640 "adrfam": "IPv4", 00:20:21.640 "traddr": "10.0.0.2", 00:20:21.640 "trsvcid": "4420" 00:20:21.640 }, 00:20:21.640 "peer_address": { 00:20:21.640 "trtype": "TCP", 00:20:21.640 "adrfam": "IPv4", 00:20:21.640 "traddr": "10.0.0.1", 00:20:21.640 "trsvcid": "57020" 00:20:21.640 }, 00:20:21.640 "auth": { 00:20:21.640 "state": "completed", 00:20:21.640 "digest": "sha512", 00:20:21.640 "dhgroup": "ffdhe4096" 00:20:21.640 } 00:20:21.640 } 00:20:21.640 ]' 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.640 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.898 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:21.899 06:30:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:22.833 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.091 06:30:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:23.658 00:20:23.658 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:23.658 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:23.658 
06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.916 { 00:20:23.916 "cntlid": 129, 00:20:23.916 "qid": 0, 00:20:23.916 "state": "enabled", 00:20:23.916 "thread": "nvmf_tgt_poll_group_000", 00:20:23.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:23.916 "listen_address": { 00:20:23.916 "trtype": "TCP", 00:20:23.916 "adrfam": "IPv4", 00:20:23.916 "traddr": "10.0.0.2", 00:20:23.916 "trsvcid": "4420" 00:20:23.916 }, 00:20:23.916 "peer_address": { 00:20:23.916 "trtype": "TCP", 00:20:23.916 "adrfam": "IPv4", 00:20:23.916 "traddr": "10.0.0.1", 00:20:23.916 "trsvcid": "57058" 00:20:23.916 }, 00:20:23.916 "auth": { 00:20:23.916 "state": "completed", 00:20:23.916 "digest": "sha512", 00:20:23.916 "dhgroup": "ffdhe6144" 00:20:23.916 } 00:20:23.916 } 00:20:23.916 ]' 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:23.916 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.173 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.173 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.173 06:30:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.431 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:24.431 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret 
DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:25.363 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.364 06:30:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.622 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.187 00:20:26.187 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.187 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.187 06:30:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.445 { 00:20:26.445 "cntlid": 131, 00:20:26.445 "qid": 0, 00:20:26.445 "state": "enabled", 00:20:26.445 "thread": "nvmf_tgt_poll_group_000", 00:20:26.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:26.445 "listen_address": { 00:20:26.445 "trtype": "TCP", 00:20:26.445 "adrfam": "IPv4", 00:20:26.445 "traddr": "10.0.0.2", 00:20:26.445 "trsvcid": "4420" 00:20:26.445 }, 00:20:26.445 "peer_address": { 00:20:26.445 "trtype": "TCP", 00:20:26.445 "adrfam": "IPv4", 00:20:26.445 "traddr": "10.0.0.1", 00:20:26.445 "trsvcid": "44766" 00:20:26.445 }, 00:20:26.445 "auth": { 00:20:26.445 "state": "completed", 00:20:26.445 "digest": "sha512", 00:20:26.445 "dhgroup": "ffdhe6144" 00:20:26.445 } 00:20:26.445 } 00:20:26.445 ]' 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.445 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:27.010 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:27.010 06:30:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:27.941 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.941 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:27.941 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.942 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.942 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.942 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.942 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:27.942 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.199 06:30:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:28.762 00:20:28.762 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:28.762 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.762 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.019 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.020 { 00:20:29.020 "cntlid": 133, 00:20:29.020 "qid": 0, 00:20:29.020 "state": "enabled", 00:20:29.020 "thread": "nvmf_tgt_poll_group_000", 00:20:29.020 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:29.020 "listen_address": { 00:20:29.020 "trtype": "TCP", 00:20:29.020 "adrfam": "IPv4", 00:20:29.020 "traddr": "10.0.0.2", 00:20:29.020 "trsvcid": "4420" 00:20:29.020 }, 00:20:29.020 "peer_address": { 00:20:29.020 "trtype": "TCP", 00:20:29.020 "adrfam": "IPv4", 00:20:29.020 "traddr": "10.0.0.1", 00:20:29.020 "trsvcid": "44800" 00:20:29.020 }, 00:20:29.020 "auth": { 00:20:29.020 "state": "completed", 00:20:29.020 "digest": "sha512", 00:20:29.020 "dhgroup": "ffdhe6144" 00:20:29.020 } 00:20:29.020 } 00:20:29.020 ]' 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.020 06:31:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.277 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret 
DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:29.277 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.210 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.210 06:31:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b 
nvme0 --dhchap-key key3 00:20:30.485 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:31.050 00:20:31.050 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:31.050 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:31.050 06:31:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.308 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.308 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.309 { 00:20:31.309 "cntlid": 135, 00:20:31.309 "qid": 0, 00:20:31.309 "state": "enabled", 00:20:31.309 "thread": "nvmf_tgt_poll_group_000", 00:20:31.309 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:31.309 "listen_address": { 00:20:31.309 "trtype": "TCP", 00:20:31.309 "adrfam": "IPv4", 00:20:31.309 "traddr": "10.0.0.2", 00:20:31.309 "trsvcid": "4420" 00:20:31.309 }, 00:20:31.309 "peer_address": { 00:20:31.309 "trtype": "TCP", 00:20:31.309 "adrfam": "IPv4", 00:20:31.309 "traddr": "10.0.0.1", 00:20:31.309 "trsvcid": "44832" 00:20:31.309 }, 00:20:31.309 "auth": { 00:20:31.309 "state": "completed", 00:20:31.309 "digest": "sha512", 00:20:31.309 "dhgroup": "ffdhe6144" 00:20:31.309 } 00:20:31.309 } 00:20:31.309 ]' 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:31.309 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.566 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.566 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.566 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.824 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:31.824 06:31:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.758 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:32.758 06:31:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:33.691 00:20:33.691 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:33.692 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:33.692 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:33.949 { 00:20:33.949 "cntlid": 137, 00:20:33.949 "qid": 0, 00:20:33.949 "state": "enabled", 00:20:33.949 "thread": "nvmf_tgt_poll_group_000", 00:20:33.949 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:33.949 "listen_address": { 00:20:33.949 "trtype": "TCP", 00:20:33.949 "adrfam": "IPv4", 00:20:33.949 "traddr": "10.0.0.2", 00:20:33.949 "trsvcid": "4420" 00:20:33.949 }, 00:20:33.949 "peer_address": { 00:20:33.949 "trtype": "TCP", 00:20:33.949 "adrfam": "IPv4", 00:20:33.949 "traddr": "10.0.0.1", 00:20:33.949 "trsvcid": "44864" 00:20:33.949 }, 00:20:33.949 "auth": { 00:20:33.949 "state": "completed", 00:20:33.949 "digest": "sha512", 00:20:33.949 "dhgroup": "ffdhe8192" 00:20:33.949 } 00:20:33.949 } 00:20:33.949 ]' 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:33.949 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.207 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.207 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.207 06:31:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.465 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:34.465 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.398 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:35.399 06:31:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:35.656 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.656 06:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:35.657 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.657 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:35.657 06:31:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:36.591 00:20:36.591 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:36.591 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:36.591 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:36.849 { 00:20:36.849 "cntlid": 139, 00:20:36.849 "qid": 0, 00:20:36.849 "state": "enabled", 00:20:36.849 "thread": "nvmf_tgt_poll_group_000", 00:20:36.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:36.849 "listen_address": { 00:20:36.849 "trtype": "TCP", 00:20:36.849 "adrfam": "IPv4", 00:20:36.849 "traddr": "10.0.0.2", 00:20:36.849 "trsvcid": "4420" 00:20:36.849 }, 00:20:36.849 "peer_address": { 00:20:36.849 "trtype": "TCP", 00:20:36.849 "adrfam": "IPv4", 00:20:36.849 "traddr": "10.0.0.1", 00:20:36.849 "trsvcid": "59892" 00:20:36.849 }, 00:20:36.849 "auth": { 00:20:36.849 "state": "completed", 00:20:36.849 "digest": "sha512", 00:20:36.849 "dhgroup": "ffdhe8192" 00:20:36.849 } 00:20:36.849 } 00:20:36.849 ]' 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:36.849 06:31:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:36.849 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.107 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:37.107 06:31:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: --dhchap-ctrl-secret DHHC-1:02:ZGExYmMyNjI5MDhlYmY5YjVhMmY1OGUyMzEyYTA0NDg5YWQxN2RiY2FmYWZiNDg0nDRQig==: 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.044 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.044 06:31:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.610 06:31:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.610 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:38.611 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.611 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:38.611 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:39.176 00:20:39.176 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:39.176 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:39.177 06:31:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.743 { 00:20:39.743 "cntlid": 141, 00:20:39.743 "qid": 0, 00:20:39.743 "state": "enabled", 00:20:39.743 "thread": "nvmf_tgt_poll_group_000", 00:20:39.743 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:39.743 "listen_address": { 00:20:39.743 "trtype": "TCP", 00:20:39.743 "adrfam": "IPv4", 00:20:39.743 "traddr": "10.0.0.2", 00:20:39.743 "trsvcid": "4420" 00:20:39.743 }, 00:20:39.743 "peer_address": { 00:20:39.743 "trtype": "TCP", 00:20:39.743 "adrfam": "IPv4", 00:20:39.743 "traddr": "10.0.0.1", 00:20:39.743 "trsvcid": "59920" 00:20:39.743 }, 00:20:39.743 "auth": { 00:20:39.743 "state": "completed", 00:20:39.743 "digest": "sha512", 00:20:39.743 "dhgroup": "ffdhe8192" 00:20:39.743 } 00:20:39.743 } 00:20:39.743 ]' 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.743 06:31:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.743 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:40.002 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:40.002 06:31:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:01:NDJiY2MxNzVmZmJhZDBkZTVlMjczMzQ1YWFlZDU0ZmUfGRx+: 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.934 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:40.934 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.191 06:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.191 06:31:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.124 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.124 { 00:20:42.124 "cntlid": 143, 00:20:42.124 "qid": 0, 00:20:42.124 "state": "enabled", 00:20:42.124 "thread": "nvmf_tgt_poll_group_000", 00:20:42.124 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:42.124 "listen_address": { 00:20:42.124 "trtype": "TCP", 00:20:42.124 "adrfam": "IPv4", 00:20:42.124 "traddr": "10.0.0.2", 00:20:42.124 "trsvcid": "4420" 00:20:42.124 }, 00:20:42.124 "peer_address": { 00:20:42.124 "trtype": "TCP", 00:20:42.124 "adrfam": "IPv4", 00:20:42.124 "traddr": "10.0.0.1", 00:20:42.124 "trsvcid": "59952" 00:20:42.124 }, 00:20:42.124 "auth": { 00:20:42.124 "state": "completed", 00:20:42.124 "digest": "sha512", 00:20:42.124 "dhgroup": "ffdhe8192" 00:20:42.124 } 00:20:42.124 } 00:20:42.124 ]' 00:20:42.124 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.382 06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:42.382 
06:31:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.382 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:42.382 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.382 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.382 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.383 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.640 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:42.641 06:31:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:43.574 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.574 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.574 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:43.574 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.574 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:43.575 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.832 06:31:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.832 06:31:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.767 00:20:44.767 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.767 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.767 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.025 { 00:20:45.025 "cntlid": 145, 00:20:45.025 "qid": 0, 00:20:45.025 "state": "enabled", 00:20:45.025 "thread": "nvmf_tgt_poll_group_000", 00:20:45.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:45.025 "listen_address": { 00:20:45.025 "trtype": "TCP", 00:20:45.025 "adrfam": "IPv4", 00:20:45.025 "traddr": "10.0.0.2", 00:20:45.025 "trsvcid": "4420" 00:20:45.025 }, 00:20:45.025 "peer_address": { 00:20:45.025 
"trtype": "TCP", 00:20:45.025 "adrfam": "IPv4", 00:20:45.025 "traddr": "10.0.0.1", 00:20:45.025 "trsvcid": "59982" 00:20:45.025 }, 00:20:45.025 "auth": { 00:20:45.025 "state": "completed", 00:20:45.025 "digest": "sha512", 00:20:45.025 "dhgroup": "ffdhe8192" 00:20:45.025 } 00:20:45.025 } 00:20:45.025 ]' 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:45.025 06:31:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:45.282 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:45.282 06:31:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:00:MzcxYmU0Y2I0YWQ3NGM1NzA3NTdmN2QwOTcxODhlMjVkODRiYzQzYTdmY2QyOTYy+SpFNA==: --dhchap-ctrl-secret DHHC-1:03:MzU4OWI2NWFjY2RhOWZkZTZiNDdjYWI5MjMwZmI3NjkzZDM5MDc2ODc4Y2UzNmVjNjA0OTZjY2EzNmM3ZmE3ZQMtYrs=: 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:46.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:46.216 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:47.151 request: 00:20:47.151 { 00:20:47.151 "name": "nvme0", 00:20:47.151 "trtype": "tcp", 00:20:47.151 "traddr": "10.0.0.2", 00:20:47.151 "adrfam": "ipv4", 00:20:47.151 "trsvcid": "4420", 00:20:47.151 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:47.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:47.151 "prchk_reftag": false, 00:20:47.151 "prchk_guard": false, 00:20:47.151 "hdgst": false, 00:20:47.151 "ddgst": false, 00:20:47.151 "dhchap_key": "key2", 00:20:47.151 "allow_unrecognized_csi": false, 00:20:47.151 "method": "bdev_nvme_attach_controller", 00:20:47.151 "req_id": 1 00:20:47.151 } 00:20:47.151 Got JSON-RPC error response 00:20:47.151 response: 00:20:47.151 { 00:20:47.151 "code": -5, 00:20:47.151 "message": "Input/output error" 00:20:47.151 } 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.151 06:31:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:47.151 06:31:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:48.084 request: 00:20:48.084 { 00:20:48.084 "name": "nvme0", 00:20:48.084 "trtype": "tcp", 00:20:48.084 "traddr": "10.0.0.2", 00:20:48.084 "adrfam": "ipv4", 00:20:48.084 "trsvcid": "4420", 00:20:48.084 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:48.084 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:48.084 "prchk_reftag": false, 00:20:48.084 "prchk_guard": false, 00:20:48.084 "hdgst": false, 00:20:48.084 "ddgst": false, 00:20:48.084 "dhchap_key": "key1", 00:20:48.084 "dhchap_ctrlr_key": "ckey2", 00:20:48.084 "allow_unrecognized_csi": false, 00:20:48.084 "method": "bdev_nvme_attach_controller", 00:20:48.084 "req_id": 1 00:20:48.084 } 00:20:48.084 Got JSON-RPC error response 00:20:48.084 response: 00:20:48.084 { 00:20:48.084 "code": -5, 00:20:48.084 "message": "Input/output error" 00:20:48.084 } 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:48.084 06:31:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.084 06:31:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:49.018 request: 00:20:49.018 { 00:20:49.018 "name": "nvme0", 00:20:49.018 "trtype": "tcp", 00:20:49.018 "traddr": "10.0.0.2", 00:20:49.018 "adrfam": "ipv4", 00:20:49.018 "trsvcid": "4420", 00:20:49.018 
"subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:49.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:49.018 "prchk_reftag": false, 00:20:49.018 "prchk_guard": false, 00:20:49.018 "hdgst": false, 00:20:49.018 "ddgst": false, 00:20:49.018 "dhchap_key": "key1", 00:20:49.018 "dhchap_ctrlr_key": "ckey1", 00:20:49.018 "allow_unrecognized_csi": false, 00:20:49.018 "method": "bdev_nvme_attach_controller", 00:20:49.018 "req_id": 1 00:20:49.018 } 00:20:49.018 Got JSON-RPC error response 00:20:49.018 response: 00:20:49.018 { 00:20:49.018 "code": -5, 00:20:49.018 "message": "Input/output error" 00:20:49.018 } 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2079140 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2079140 ']' 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2079140 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2079140 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2079140' 00:20:49.018 killing process with pid 2079140 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2079140 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2079140 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:49.018 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=2101910 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 2101910 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2101910 ']' 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.278 06:31:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # waitforlisten 2101910 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # '[' -z 2101910 ']' 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:49.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:20:49.536 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.794 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:20:49.794 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@866 -- # return 0 00:20:49.794 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:49.794 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:49.794 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.794 null0 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.K1Q 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.FXn ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.FXn 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.MXc 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.GMB ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.GMB 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:50.051 06:31:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.1k1 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.ONE ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ONE 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.2Ug 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.051 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:50.052 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:50.052 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 
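The records above load the generated DH-HMAC-CHAP secrets into the target's keyring (key0..key3 plus the ckey controller secrets where present) and then start connect_authenticate for sha512/ffdhe8192 with key3: the host NQN is authorized on the subsystem with --dhchap-key key3 and the host-side attach is issued with the same key. A condensed sketch of that RPC sequence, assembled from the exact commands in this log; the host application behind /var/tmp/host.sock also needs the same key registered in its own keyring, which is not repeated here:

  # Names, key files and sockets below are the ones used in this run.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
  SUBNQN=nqn.2024-03.io.spdk:cnode0
  # Target side: register the key material and allow the host to authenticate with key3.
  $RPC keyring_file_add_key key3 /tmp/spdk.key-sha512.2Ug
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key key3
  # Host side: attach a controller over TCP, presenting key3 during DH-HMAC-CHAP.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
      -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key key3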
00:20:50.052 06:31:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:51.423 nvme0n1 00:20:51.423 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.423 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.423 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.681 { 00:20:51.681 "cntlid": 1, 00:20:51.681 "qid": 0, 00:20:51.681 "state": "enabled", 00:20:51.681 "thread": "nvmf_tgt_poll_group_000", 00:20:51.681 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:51.681 "listen_address": { 00:20:51.681 "trtype": "TCP", 00:20:51.681 "adrfam": "IPv4", 00:20:51.681 "traddr": "10.0.0.2", 00:20:51.681 "trsvcid": "4420" 00:20:51.681 }, 00:20:51.681 "peer_address": { 00:20:51.681 "trtype": "TCP", 00:20:51.681 "adrfam": "IPv4", 00:20:51.681 "traddr": "10.0.0.1", 00:20:51.681 "trsvcid": "32996" 00:20:51.681 }, 00:20:51.681 "auth": { 00:20:51.681 "state": "completed", 00:20:51.681 "digest": "sha512", 00:20:51.681 "dhgroup": "ffdhe8192" 00:20:51.681 } 00:20:51.681 } 00:20:51.681 ]' 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:51.681 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.938 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:51.938 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.938 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.938 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.938 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:52.196 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:52.196 06:31:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.128 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key3 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:53.128 06:31:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.385 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.643 request: 00:20:53.643 { 00:20:53.643 "name": "nvme0", 00:20:53.643 "trtype": "tcp", 00:20:53.643 "traddr": "10.0.0.2", 00:20:53.643 "adrfam": "ipv4", 00:20:53.643 "trsvcid": "4420", 00:20:53.643 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:53.643 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:53.643 "prchk_reftag": false, 00:20:53.643 "prchk_guard": false, 00:20:53.643 "hdgst": false, 00:20:53.643 "ddgst": false, 00:20:53.643 "dhchap_key": "key3", 00:20:53.643 "allow_unrecognized_csi": false, 00:20:53.643 "method": "bdev_nvme_attach_controller", 00:20:53.643 "req_id": 1 00:20:53.643 } 00:20:53.643 Got JSON-RPC error response 00:20:53.643 response: 00:20:53.643 { 00:20:53.643 "code": -5, 00:20:53.643 "message": "Input/output error" 00:20:53.643 } 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:53.643 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:53.899 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:54.156 request: 00:20:54.156 { 00:20:54.156 "name": "nvme0", 00:20:54.156 "trtype": "tcp", 00:20:54.156 "traddr": "10.0.0.2", 00:20:54.156 "adrfam": "ipv4", 00:20:54.156 "trsvcid": "4420", 00:20:54.157 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.157 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:54.157 "prchk_reftag": false, 00:20:54.157 "prchk_guard": false, 00:20:54.157 "hdgst": false, 00:20:54.157 "ddgst": false, 00:20:54.157 "dhchap_key": "key3", 00:20:54.157 "allow_unrecognized_csi": false, 00:20:54.157 "method": "bdev_nvme_attach_controller", 00:20:54.157 "req_id": 1 00:20:54.157 } 00:20:54.157 Got JSON-RPC error response 00:20:54.157 response: 00:20:54.157 { 00:20:54.157 "code": -5, 00:20:54.157 "message": "Input/output error" 00:20:54.157 } 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:54.157 06:31:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:54.414 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.415 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:54.980 request: 00:20:54.980 { 00:20:54.980 "name": "nvme0", 00:20:54.980 "trtype": "tcp", 00:20:54.980 "traddr": "10.0.0.2", 00:20:54.980 "adrfam": "ipv4", 00:20:54.980 "trsvcid": "4420", 00:20:54.980 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:54.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:54.980 "prchk_reftag": false, 00:20:54.980 "prchk_guard": false, 00:20:54.980 "hdgst": false, 00:20:54.980 "ddgst": false, 00:20:54.980 "dhchap_key": "key0", 00:20:54.980 "dhchap_ctrlr_key": "key1", 00:20:54.980 "allow_unrecognized_csi": false, 00:20:54.980 "method": "bdev_nvme_attach_controller", 00:20:54.980 "req_id": 1 00:20:54.980 } 00:20:54.980 Got JSON-RPC error response 00:20:54.980 response: 00:20:54.980 { 00:20:54.980 "code": -5, 00:20:54.980 "message": "Input/output error" 00:20:54.980 } 00:20:54.980 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:54.980 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:54.980 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:54.980 06:31:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:54.980 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:54.980 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:54.980 06:31:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:55.237 nvme0n1 00:20:55.237 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:55.237 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.237 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:55.494 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.494 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.494 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:55.752 06:31:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:57.124 nvme0n1 00:20:57.124 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:57.124 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:57.124 06:31:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:57.381 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.638 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.638 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:57.896 06:31:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid 29f67375-a902-e411-ace9-001e67bc3c9a -l 0 --dhchap-secret DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: --dhchap-ctrl-secret DHHC-1:03:MjJhOWM0YWRhMzg0YzI4YjVmODUyYzNmZmE2MGZiNmZkMmQyNGZiNDdiNDBhNDYwOTNhMDk1MmY1YmRiODc2Y2a+hnA=: 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 
--dhchap-key key1 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:58.827 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=bdev_connect 00:20:58.828 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.828 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t bdev_connect 00:20:58.828 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:58.828 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:58.828 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:58.828 06:31:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:59.759 request: 00:20:59.759 { 00:20:59.759 "name": "nvme0", 00:20:59.759 "trtype": "tcp", 00:20:59.759 "traddr": "10.0.0.2", 00:20:59.759 "adrfam": "ipv4", 00:20:59.759 "trsvcid": "4420", 00:20:59.759 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:59.759 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a", 00:20:59.759 "prchk_reftag": false, 00:20:59.759 "prchk_guard": false, 00:20:59.759 "hdgst": false, 00:20:59.759 "ddgst": false, 00:20:59.759 "dhchap_key": "key1", 00:20:59.759 "allow_unrecognized_csi": false, 00:20:59.759 "method": "bdev_nvme_attach_controller", 00:20:59.759 "req_id": 1 00:20:59.759 } 00:20:59.759 Got JSON-RPC error response 00:20:59.759 response: 00:20:59.759 { 00:20:59.759 "code": -5, 00:20:59.759 "message": "Input/output error" 00:20:59.759 } 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:59.759 06:31:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:01.132 nvme0n1 00:21:01.132 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:21:01.132 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:21:01.132 06:31:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.390 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:01.390 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:01.390 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:01.674 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:21:01.952 nvme0n1 00:21:01.952 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:21:01.952 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:01.952 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:21:02.210 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.210 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.210 06:31:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.467 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: '' 2s 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: ]] 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTg5MDNiMTc2N2ZmNjM5NGE1MjViODg1N2E1MWZjNzMoPndF: 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:02.468 06:31:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key1 --dhchap-ctrlr-key key2 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' 
DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: 2s 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: ]] 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTFhNDU0ZGJiMWE2NDk0ZTQ0YjUxZTc3NzliMDg0MjBjYTNmMDI3NWJiODAyMDBjzvBJsw==: 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:21:04.996 06:31:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1237 -- # local i=0 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1248 -- # return 0 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.895 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:06.895 06:31:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:08.267 nvme0n1 00:21:08.267 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:08.267 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:08.267 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.268 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:08.268 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:08.268 06:31:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:08.833 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:21:08.833 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.833 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:21:09.091 06:31:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:21:09.657 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:21:09.657 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:21:09.657 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:09.914 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:09.915 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:09.915 06:31:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:21:10.478 request: 00:21:10.478 { 00:21:10.478 "name": "nvme0", 00:21:10.478 "dhchap_key": "key1", 00:21:10.478 "dhchap_ctrlr_key": "key3", 00:21:10.478 "method": "bdev_nvme_set_keys", 00:21:10.478 "req_id": 1 00:21:10.478 } 00:21:10.478 Got JSON-RPC error response 00:21:10.478 response: 00:21:10.478 { 00:21:10.478 "code": -13, 00:21:10.478 "message": "Permission denied" 00:21:10.478 } 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:10.736 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.993 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@262 -- # (( 1 != 0 )) 00:21:10.993 06:31:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:21:11.926 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:21:11.926 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:21:11.926 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key0 --dhchap-ctrlr-key key1 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:12.184 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:12.185 06:31:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:21:13.570 nvme0n1 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --dhchap-key key2 --dhchap-ctrlr-key key3 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 
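The checks around this point exercise the key-rotation guard: after the subsystem is re-keyed with nvmf_subsystem_set_keys, host-side bdev_nvme_set_keys calls that pair the wrong keys (key1/key3 above, key2/key0 just below) must be rejected with JSON-RPC error -13, Permission denied, and the test then polls bdev_nvme_get_controllers until the stale controller drops off (the count goes from 1 to 0 after one 1s sleep, helped by the --ctrlr-loss-timeout-sec 1 used at attach time). A schematic of that assert-failure pattern; NOT is a simplified stand-in for the harness helper in autotest_common.sh:

  # Schematic only: assert that a key rotation is rejected, then wait for the controller to drain.
  NOT() { ! "$@"; }   # simplified; the real helper also inspects the exit status it propagates
  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  NOT $HOSTRPC bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0   # expect -13 Permission denied
  while [[ "$($HOSTRPC bdev_nvme_get_controllers | jq length)" != 0 ]]; do
      sleep 1
  done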
00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:13.570 06:31:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:21:14.506 request: 00:21:14.506 { 00:21:14.506 "name": "nvme0", 00:21:14.506 "dhchap_key": "key2", 00:21:14.506 "dhchap_ctrlr_key": "key0", 00:21:14.506 "method": "bdev_nvme_set_keys", 00:21:14.506 "req_id": 1 00:21:14.506 } 00:21:14.506 Got JSON-RPC error response 00:21:14.506 response: 00:21:14.506 { 00:21:14.506 "code": -13, 00:21:14.506 "message": "Permission denied" 00:21:14.506 } 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.506 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:14.764 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:21:14.764 06:31:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:21:15.698 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:21:15.698 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:15.698 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2079171 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2079171 ']' 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2079171 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:21:15.956 
06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2079171 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2079171' 00:21:15.956 killing process with pid 2079171 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2079171 00:21:15.956 06:31:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2079171 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:16.521 rmmod nvme_tcp 00:21:16.521 rmmod nvme_fabrics 00:21:16.521 rmmod nvme_keyring 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 2101910 ']' 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 2101910 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # '[' -z 2101910 ']' 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # kill -0 2101910 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # uname 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2101910 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2101910' 00:21:16.521 killing process with pid 2101910 00:21:16.521 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@971 -- # kill 2101910 00:21:16.521 06:31:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@976 -- # wait 2101910 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:16.781 06:31:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.K1Q /tmp/spdk.key-sha256.MXc /tmp/spdk.key-sha384.1k1 /tmp/spdk.key-sha512.2Ug /tmp/spdk.key-sha512.FXn /tmp/spdk.key-sha384.GMB /tmp/spdk.key-sha256.ONE '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:21:18.688 00:21:18.688 real 3m32.254s 00:21:18.688 user 8m18.645s 00:21:18.688 sys 0m27.764s 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.688 ************************************ 00:21:18.688 END TEST nvmf_auth_target 00:21:18.688 ************************************ 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:18.688 ************************************ 00:21:18.688 START TEST nvmf_bdevio_no_huge 00:21:18.688 ************************************ 00:21:18.688 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:21:18.947 * Looking for test storage... 
00:21:18.947 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.947 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lcov --version 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.948 --rc genhtml_branch_coverage=1 00:21:18.948 --rc genhtml_function_coverage=1 00:21:18.948 --rc genhtml_legend=1 00:21:18.948 --rc geninfo_all_blocks=1 00:21:18.948 --rc geninfo_unexecuted_blocks=1 00:21:18.948 00:21:18.948 ' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.948 --rc genhtml_branch_coverage=1 00:21:18.948 --rc genhtml_function_coverage=1 00:21:18.948 --rc genhtml_legend=1 00:21:18.948 --rc geninfo_all_blocks=1 00:21:18.948 --rc geninfo_unexecuted_blocks=1 00:21:18.948 00:21:18.948 ' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.948 --rc genhtml_branch_coverage=1 00:21:18.948 --rc genhtml_function_coverage=1 00:21:18.948 --rc genhtml_legend=1 00:21:18.948 --rc geninfo_all_blocks=1 00:21:18.948 --rc geninfo_unexecuted_blocks=1 00:21:18.948 00:21:18.948 ' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:18.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.948 --rc genhtml_branch_coverage=1 00:21:18.948 --rc genhtml_function_coverage=1 00:21:18.948 --rc genhtml_legend=1 00:21:18.948 --rc geninfo_all_blocks=1 00:21:18.948 --rc geninfo_unexecuted_blocks=1 00:21:18.948 00:21:18.948 ' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.948 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:21:18.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:21:18.949 06:31:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:21:21.481 
06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:21.481 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:21.482 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:21.482 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:21.482 Found net devices under 0000:09:00.0: cvl_0_0 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:21.482 Found net devices under 0000:09:00.1: cvl_0_1 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:21.482 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:21.482 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.350 ms 00:21:21.482 00:21:21.482 --- 10.0.0.2 ping statistics --- 00:21:21.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.482 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:21.482 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:21.482 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:21:21.482 00:21:21.482 --- 10.0.0.1 ping statistics --- 00:21:21.482 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:21.482 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=2107167 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 2107167 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # '[' -z 2107167 ']' 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:21.482 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- common/autotest_common.sh@838 -- # local max_retries=100 00:21:21.483 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:21.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:21.483 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:21.483 06:31:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.483 [2024-11-20 06:31:53.031799] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:21:21.483 [2024-11-20 06:31:53.031882] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:21:21.483 [2024-11-20 06:31:53.111014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:21.483 [2024-11-20 06:31:53.172823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:21.483 [2024-11-20 06:31:53.172894] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:21.483 [2024-11-20 06:31:53.172916] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:21.483 [2024-11-20 06:31:53.172926] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:21.483 [2024-11-20 06:31:53.172936] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
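
At this point nvmftestinit and nvmfappstart have done all of the host plumbing for this suite. Reduced to plain iproute2/iptables commands plus the target launch, the sequence traced above is roughly the sketch below; interface, namespace, and repo paths are the ones from this run, the iptables comment is abbreviated to the SPDK_NVMF tag that the later iptables-save | grep -v SPDK_NVMF cleanup keys on, and the memory/core options mirror the nvmfappstart call (--no-huge -s 1024 -m 0x78, i.e. cores 3-6 as in the reactor notices that follow):

  # move the first E810 port into its own namespace and address both ends (nvmf_tcp_init)
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # let NVMe/TCP traffic from the initiator-side port in, tagged for later cleanup
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
  # start the target inside the namespace without hugepages: 1024 MiB of regular memory
  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
      -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
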
00:21:21.483 [2024-11-20 06:31:53.173995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:21.483 [2024-11-20 06:31:53.174066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:21:21.483 [2024-11-20 06:31:53.174143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:21:21.483 [2024-11-20 06:31:53.174148] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:21.483 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:21.483 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@866 -- # return 0 00:21:21.483 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:21.483 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:21.483 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 [2024-11-20 06:31:53.327028] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 Malloc0 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 
4420 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:21.742 [2024-11-20 06:31:53.365177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:21:21.742 { 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme$subsystem", 00:21:21.742 "trtype": "$TEST_TRANSPORT", 00:21:21.742 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "$NVMF_PORT", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:21.742 "hdgst": ${hdgst:-false}, 00:21:21.742 "ddgst": ${ddgst:-false} 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 } 00:21:21.742 EOF 00:21:21.742 )") 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:21:21.742 06:31:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:21:21.742 "params": { 00:21:21.742 "name": "Nvme1", 00:21:21.742 "trtype": "tcp", 00:21:21.742 "traddr": "10.0.0.2", 00:21:21.742 "adrfam": "ipv4", 00:21:21.742 "trsvcid": "4420", 00:21:21.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:21.742 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:21.742 "hdgst": false, 00:21:21.742 "ddgst": false 00:21:21.742 }, 00:21:21.742 "method": "bdev_nvme_attach_controller" 00:21:21.742 }' 00:21:21.742 [2024-11-20 06:31:53.414524] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
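
rpc_cmd in the trace above is the harness wrapper around scripts/rpc.py. Issued by hand, the subsystem provisioning that bdevio.sh just performed comes down to the five calls below; the flags are copied from the trace, and the RPC socket path is assumed to be the target's default /var/tmp/spdk.sock:

  RPC='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk.sock'
  $RPC nvmf_create_transport -t tcp -o -u 8192                 # TCP transport, 8 KiB IO unit size
  $RPC bdev_malloc_create 64 512 -b Malloc0                    # 64 MiB backing bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
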
00:21:21.742 [2024-11-20 06:31:53.414631] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2107312 ] 00:21:21.742 [2024-11-20 06:31:53.488831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:21.742 [2024-11-20 06:31:53.554939] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.742 [2024-11-20 06:31:53.554994] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:21.742 [2024-11-20 06:31:53.554998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:22.001 I/O targets: 00:21:22.001 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:21:22.001 00:21:22.001 00:21:22.001 CUnit - A unit testing framework for C - Version 2.1-3 00:21:22.001 http://cunit.sourceforge.net/ 00:21:22.001 00:21:22.001 00:21:22.001 Suite: bdevio tests on: Nvme1n1 00:21:22.001 Test: blockdev write read block ...passed 00:21:22.259 Test: blockdev write zeroes read block ...passed 00:21:22.259 Test: blockdev write zeroes read no split ...passed 00:21:22.259 Test: blockdev write zeroes read split ...passed 00:21:22.259 Test: blockdev write zeroes read split partial ...passed 00:21:22.259 Test: blockdev reset ...[2024-11-20 06:31:53.871647] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:22.259 [2024-11-20 06:31:53.871762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x20b66e0 (9): Bad file descriptor 00:21:22.259 [2024-11-20 06:31:53.889212] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
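
The --json /dev/fd/62 argument above is a process substitution carrying the gen_nvmf_target_json output, and the bdev_nvme_attach_controller block printed a few entries up is the only config entry that matters for this run. Written to a file, a config that drives the same bdevio invocation looks roughly like the sketch below; the outer subsystems/bdev wrapper is the helper's usual shape and is reconstructed here, only the inner params block appears verbatim in this log, and the file name is illustrative:

  cat > bdevio_nvme1.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false } } ] } ] }
  EOF
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio \
      --json bdevio_nvme1.json --no-huge -s 1024
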
00:21:22.259 passed 00:21:22.259 Test: blockdev write read 8 blocks ...passed 00:21:22.259 Test: blockdev write read size > 128k ...passed 00:21:22.259 Test: blockdev write read invalid size ...passed 00:21:22.259 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:22.259 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:22.259 Test: blockdev write read max offset ...passed 00:21:22.259 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:22.259 Test: blockdev writev readv 8 blocks ...passed 00:21:22.259 Test: blockdev writev readv 30 x 1block ...passed 00:21:22.517 Test: blockdev writev readv block ...passed 00:21:22.517 Test: blockdev writev readv size > 128k ...passed 00:21:22.517 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:22.517 Test: blockdev comparev and writev ...[2024-11-20 06:31:54.143410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.143447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.143472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.143496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.143862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.143896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.143930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.143958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.144334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.144369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.144403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.144429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.144804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.144838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.144870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:21:22.517 [2024-11-20 06:31:54.144899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:21:22.517 passed 00:21:22.517 Test: blockdev nvme passthru rw ...passed 00:21:22.517 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:31:54.226581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.517 [2024-11-20 06:31:54.226612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.226758] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.517 [2024-11-20 06:31:54.226781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.226927] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.517 [2024-11-20 06:31:54.226951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:21:22.517 [2024-11-20 06:31:54.227096] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:22.517 [2024-11-20 06:31:54.227121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:21:22.517 passed 00:21:22.517 Test: blockdev nvme admin passthru ...passed 00:21:22.517 Test: blockdev copy ...passed 00:21:22.517 00:21:22.517 Run Summary: Type Total Ran Passed Failed Inactive 00:21:22.517 suites 1 1 n/a 0 0 00:21:22.517 tests 23 23 23 0 0 00:21:22.517 asserts 152 152 152 0 n/a 00:21:22.517 00:21:22.517 Elapsed time = 1.065 seconds 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:21:23.085 rmmod nvme_tcp 00:21:23.085 rmmod nvme_fabrics 00:21:23.085 rmmod nvme_keyring 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@128 -- # set -e 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 2107167 ']' 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 2107167 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # '[' -z 2107167 ']' 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # kill -0 2107167 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # uname 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2107167 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2107167' 00:21:23.085 killing process with pid 2107167 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@971 -- # kill 2107167 00:21:23.085 06:31:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@976 -- # wait 2107167 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:23.346 06:31:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:21:25.882 00:21:25.882 real 0m6.651s 00:21:25.882 user 0m10.247s 00:21:25.882 sys 0m2.691s 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1128 -- # xtrace_disable 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@10 -- # set +x 00:21:25.882 ************************************ 00:21:25.882 END TEST nvmf_bdevio_no_huge 00:21:25.882 ************************************ 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:25.882 ************************************ 00:21:25.882 START TEST nvmf_tls 00:21:25.882 ************************************ 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:21:25.882 * Looking for test storage... 00:21:25.882 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lcov --version 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:21:25.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.882 --rc genhtml_branch_coverage=1 00:21:25.882 --rc genhtml_function_coverage=1 00:21:25.882 --rc genhtml_legend=1 00:21:25.882 --rc geninfo_all_blocks=1 00:21:25.882 --rc geninfo_unexecuted_blocks=1 00:21:25.882 00:21:25.882 ' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:21:25.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.882 --rc genhtml_branch_coverage=1 00:21:25.882 --rc genhtml_function_coverage=1 00:21:25.882 --rc genhtml_legend=1 00:21:25.882 --rc geninfo_all_blocks=1 00:21:25.882 --rc geninfo_unexecuted_blocks=1 00:21:25.882 00:21:25.882 ' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:21:25.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.882 --rc genhtml_branch_coverage=1 00:21:25.882 --rc genhtml_function_coverage=1 00:21:25.882 --rc genhtml_legend=1 00:21:25.882 --rc geninfo_all_blocks=1 00:21:25.882 --rc geninfo_unexecuted_blocks=1 00:21:25.882 00:21:25.882 ' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:21:25.882 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:25.882 --rc genhtml_branch_coverage=1 00:21:25.882 --rc genhtml_function_coverage=1 00:21:25.882 --rc genhtml_legend=1 00:21:25.882 --rc geninfo_all_blocks=1 00:21:25.882 --rc geninfo_unexecuted_blocks=1 00:21:25.882 00:21:25.882 ' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 
00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:25.882 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:21:25.883 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # xtrace_disable 00:21:25.883 06:31:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 
00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:21:27.785 Found 0000:09:00.0 (0x8086 - 0x159b) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:21:27.785 Found 0000:09:00.1 (0x8086 - 0x159b) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.785 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:21:27.786 Found net devices under 0000:09:00.0: cvl_0_0 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:21:27.786 Found net devices under 0000:09:00.1: cvl_0_1 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # 
NVMF_SECOND_INITIATOR_IP= 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:21:27.786 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:21:28.044 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:28.044 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.394 ms 00:21:28.044 00:21:28.044 --- 10.0.0.2 ping statistics --- 00:21:28.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.044 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:28.044 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:28.044 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:21:28.044 00:21:28.044 --- 10.0.0.1 ping statistics --- 00:21:28.044 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:28.044 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2109396 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2109396 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2109396 ']' 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:28.044 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:28.044 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.044 [2024-11-20 06:31:59.733010] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:21:28.044 [2024-11-20 06:31:59.733090] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:28.044 [2024-11-20 06:31:59.807761] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.044 [2024-11-20 06:31:59.866039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:28.044 [2024-11-20 06:31:59.866094] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:28.044 [2024-11-20 06:31:59.866107] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:28.044 [2024-11-20 06:31:59.866118] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:28.044 [2024-11-20 06:31:59.866127] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:28.044 [2024-11-20 06:31:59.866737] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:28.301 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:28.301 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:28.301 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:21:28.301 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:28.302 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:28.302 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:28.302 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:21:28.302 06:31:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:21:28.559 true 00:21:28.559 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:21:28.559 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:28.818 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:21:28.818 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:21:28.818 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:29.076 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:29.076 06:32:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:21:29.334 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:21:29.334 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:21:29.334 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:21:29.592 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:29.592 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:21:30.182 06:32:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:21:30.441 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:30.441 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:21:30.699 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:21:30.699 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:21:30.699 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:21:30.957 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:21:30.957 06:32:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@730 -- # local prefix key digest 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.NLUH5B8Prh 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.I17JezJCP7 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.NLUH5B8Prh 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # chmod 0600 /tmp/tmp.I17JezJCP7 00:21:31.521 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:21:31.778 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:21:32.037 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.NLUH5B8Prh 00:21:32.037 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.NLUH5B8Prh 00:21:32.037 06:32:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.603 [2024-11-20 06:32:04.169771] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.603 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:32.860 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:33.117 [2024-11-20 06:32:04.807414] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:33.117 [2024-11-20 06:32:04.807638] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:33.117 06:32:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:33.375 malloc0 00:21:33.375 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.633 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.NLUH5B8Prh 00:21:33.892 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:34.150 06:32:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.NLUH5B8Prh 00:21:46.412 Initializing NVMe Controllers 00:21:46.412 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:46.412 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:46.412 Initialization complete. Launching workers. 00:21:46.412 ======================================================== 00:21:46.412 Latency(us) 00:21:46.412 Device Information : IOPS MiB/s Average min max 00:21:46.412 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8642.95 33.76 7406.90 1076.17 8833.42 00:21:46.412 ======================================================== 00:21:46.412 Total : 8642.95 33.76 7406.90 1076.17 8833.42 00:21:46.412 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.NLUH5B8Prh 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NLUH5B8Prh 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2111417 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2111417 /var/tmp/bdevperf.sock 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2111417 ']' 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:46.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:46.412 [2024-11-20 06:32:16.124270] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:21:46.412 [2024-11-20 06:32:16.124416] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2111417 ] 00:21:46.412 [2024-11-20 06:32:16.189540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:46.412 [2024-11-20 06:32:16.247141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NLUH5B8Prh 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:46.412 [2024-11-20 06:32:16.901631] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:46.412 TLSTESTn1 00:21:46.412 06:32:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:46.412 Running I/O for 10 seconds... 
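Condensed from the trace above, the TLS target setup that tls.sh drives through rpc.py comes down to roughly the following sequence (a sketch only: variable names are shortened, the redirect into the key file is implied rather than shown in the xtrace, and the /tmp key path is simply what mktemp returned in this run):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # PSK in NVMe TLS interchange format; the configured key is
  # 00112233445566778899aabbccddeeff and the digest argument 1 yields the
  # "01" hash field visible in the resulting string.
  key='NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:'
  key_path=$(mktemp)               # /tmp/tmp.NLUH5B8Prh in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"

  # Pin the ssl socket implementation to TLS 1.3, then finish framework init.
  $rpc sock_impl_set_options -i ssl --tls-version 13
  $rpc framework_start_init

  # TCP transport, subsystem, TLS-enabled listener (-k), and a malloc namespace.
  $rpc nvmf_create_transport -t tcp -o
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $rpc bdev_malloc_create 32 4096 -b malloc0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1

  # Register the PSK in the keyring and allow host1 to connect with it.
  $rpc keyring_file_add_key key0 "$key_path"
  $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0

The initiator side then presents the same key, either as --psk-path to spdk_nvme_perf (the run whose summary appears above) or as a keyring key passed to bdev_nvme_attach_controller --psk on the bdevperf RPC socket (the TLSTESTn1 run that follows).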
00:21:47.346 3581.00 IOPS, 13.99 MiB/s [2024-11-20T05:32:20.113Z] 3531.00 IOPS, 13.79 MiB/s [2024-11-20T05:32:21.487Z] 3556.33 IOPS, 13.89 MiB/s [2024-11-20T05:32:22.419Z] 3565.50 IOPS, 13.93 MiB/s [2024-11-20T05:32:23.353Z] 3545.20 IOPS, 13.85 MiB/s [2024-11-20T05:32:24.285Z] 3555.00 IOPS, 13.89 MiB/s [2024-11-20T05:32:25.217Z] 3572.43 IOPS, 13.95 MiB/s [2024-11-20T05:32:26.151Z] 3580.62 IOPS, 13.99 MiB/s [2024-11-20T05:32:27.524Z] 3587.22 IOPS, 14.01 MiB/s [2024-11-20T05:32:27.524Z] 3596.50 IOPS, 14.05 MiB/s 00:21:55.688 Latency(us) 00:21:55.688 [2024-11-20T05:32:27.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.688 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:55.688 Verification LBA range: start 0x0 length 0x2000 00:21:55.688 TLSTESTn1 : 10.02 3602.38 14.07 0.00 0.00 35474.30 6893.42 30098.01 00:21:55.688 [2024-11-20T05:32:27.524Z] =================================================================================================================== 00:21:55.688 [2024-11-20T05:32:27.524Z] Total : 3602.38 14.07 0.00 0.00 35474.30 6893.42 30098.01 00:21:55.688 { 00:21:55.688 "results": [ 00:21:55.688 { 00:21:55.688 "job": "TLSTESTn1", 00:21:55.688 "core_mask": "0x4", 00:21:55.688 "workload": "verify", 00:21:55.688 "status": "finished", 00:21:55.688 "verify_range": { 00:21:55.688 "start": 0, 00:21:55.688 "length": 8192 00:21:55.688 }, 00:21:55.688 "queue_depth": 128, 00:21:55.688 "io_size": 4096, 00:21:55.688 "runtime": 10.018941, 00:21:55.688 "iops": 3602.376738220137, 00:21:55.688 "mibps": 14.071784133672411, 00:21:55.688 "io_failed": 0, 00:21:55.688 "io_timeout": 0, 00:21:55.688 "avg_latency_us": 35474.30059379117, 00:21:55.688 "min_latency_us": 6893.416296296296, 00:21:55.688 "max_latency_us": 30098.014814814815 00:21:55.688 } 00:21:55.688 ], 00:21:55.688 "core_count": 1 00:21:55.688 } 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2111417 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2111417 ']' 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2111417 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2111417 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2111417' 00:21:55.688 killing process with pid 2111417 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2111417 00:21:55.688 Received shutdown signal, test time was about 10.000000 seconds 00:21:55.688 00:21:55.688 Latency(us) 00:21:55.688 [2024-11-20T05:32:27.524Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.688 [2024-11-20T05:32:27.524Z] 
=================================================================================================================== 00:21:55.688 [2024-11-20T05:32:27.524Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2111417 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I17JezJCP7 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I17JezJCP7 00:21:55.688 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.I17JezJCP7 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.I17JezJCP7 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2112738 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2112738 /var/tmp/bdevperf.sock 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2112738 ']' 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:55.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:55.689 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.689 [2024-11-20 06:32:27.469838] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:21:55.689 [2024-11-20 06:32:27.469939] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112738 ] 00:21:55.947 [2024-11-20 06:32:27.538421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.947 [2024-11-20 06:32:27.593746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.947 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:55.947 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:55.947 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.I17JezJCP7 00:21:56.205 06:32:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:56.463 [2024-11-20 06:32:28.234157] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:56.463 [2024-11-20 06:32:28.239761] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:56.463 [2024-11-20 06:32:28.240264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3d2c0 (107): Transport endpoint is not connected 00:21:56.463 [2024-11-20 06:32:28.241255] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb3d2c0 (9): Bad file descriptor 00:21:56.463 [2024-11-20 06:32:28.242254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:56.463 [2024-11-20 06:32:28.242275] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:56.463 [2024-11-20 06:32:28.242310] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:56.463 [2024-11-20 06:32:28.242329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:56.463 request: 00:21:56.463 { 00:21:56.463 "name": "TLSTEST", 00:21:56.463 "trtype": "tcp", 00:21:56.463 "traddr": "10.0.0.2", 00:21:56.463 "adrfam": "ipv4", 00:21:56.463 "trsvcid": "4420", 00:21:56.463 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.463 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.463 "prchk_reftag": false, 00:21:56.463 "prchk_guard": false, 00:21:56.463 "hdgst": false, 00:21:56.463 "ddgst": false, 00:21:56.463 "psk": "key0", 00:21:56.463 "allow_unrecognized_csi": false, 00:21:56.463 "method": "bdev_nvme_attach_controller", 00:21:56.463 "req_id": 1 00:21:56.463 } 00:21:56.463 Got JSON-RPC error response 00:21:56.463 response: 00:21:56.463 { 00:21:56.463 "code": -5, 00:21:56.463 "message": "Input/output error" 00:21:56.463 } 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2112738 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2112738 ']' 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2112738 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2112738 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2112738' 00:21:56.463 killing process with pid 2112738 00:21:56.463 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2112738 00:21:56.463 Received shutdown signal, test time was about 10.000000 seconds 00:21:56.463 00:21:56.463 Latency(us) 00:21:56.463 [2024-11-20T05:32:28.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.463 [2024-11-20T05:32:28.299Z] =================================================================================================================== 00:21:56.463 [2024-11-20T05:32:28.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:56.464 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2112738 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NLUH5B8Prh 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 
/tmp/tmp.NLUH5B8Prh 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.NLUH5B8Prh 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NLUH5B8Prh 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2112884 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2112884 /var/tmp/bdevperf.sock 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2112884 ']' 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:56.722 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.722 [2024-11-20 06:32:28.544701] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:21:56.722 [2024-11-20 06:32:28.544799] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112884 ] 00:21:56.980 [2024-11-20 06:32:28.612162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.980 [2024-11-20 06:32:28.668294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:56.980 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:56.980 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:56.980 06:32:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NLUH5B8Prh 00:21:57.237 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:57.497 [2024-11-20 06:32:29.292987] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:57.497 [2024-11-20 06:32:29.300874] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:57.497 [2024-11-20 06:32:29.300903] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:57.497 [2024-11-20 06:32:29.300939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:57.497 [2024-11-20 06:32:29.301074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ce2c0 (107): Transport endpoint is not connected 00:21:57.497 [2024-11-20 06:32:29.302062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ce2c0 (9): Bad file descriptor 00:21:57.497 [2024-11-20 06:32:29.303061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:57.497 [2024-11-20 06:32:29.303081] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:57.497 [2024-11-20 06:32:29.303093] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:57.497 [2024-11-20 06:32:29.303110] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 
00:21:57.497 request: 00:21:57.497 { 00:21:57.497 "name": "TLSTEST", 00:21:57.497 "trtype": "tcp", 00:21:57.497 "traddr": "10.0.0.2", 00:21:57.497 "adrfam": "ipv4", 00:21:57.497 "trsvcid": "4420", 00:21:57.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.497 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:57.497 "prchk_reftag": false, 00:21:57.497 "prchk_guard": false, 00:21:57.497 "hdgst": false, 00:21:57.497 "ddgst": false, 00:21:57.497 "psk": "key0", 00:21:57.497 "allow_unrecognized_csi": false, 00:21:57.497 "method": "bdev_nvme_attach_controller", 00:21:57.497 "req_id": 1 00:21:57.497 } 00:21:57.497 Got JSON-RPC error response 00:21:57.497 response: 00:21:57.497 { 00:21:57.497 "code": -5, 00:21:57.497 "message": "Input/output error" 00:21:57.497 } 00:21:57.497 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2112884 00:21:57.497 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2112884 ']' 00:21:57.497 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2112884 00:21:57.497 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:57.497 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:57.497 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2112884 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2112884' 00:21:57.759 killing process with pid 2112884 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2112884 00:21:57.759 Received shutdown signal, test time was about 10.000000 seconds 00:21:57.759 00:21:57.759 Latency(us) 00:21:57.759 [2024-11-20T05:32:29.595Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:57.759 [2024-11-20T05:32:29.595Z] =================================================================================================================== 00:21:57.759 [2024-11-20T05:32:29.595Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2112884 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NLUH5B8Prh 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 
/tmp/tmp.NLUH5B8Prh 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.NLUH5B8Prh 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.NLUH5B8Prh 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2112987 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2112987 /var/tmp/bdevperf.sock 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2112987 ']' 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:57.759 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:58.017 [2024-11-20 06:32:29.629441] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:21:58.017 [2024-11-20 06:32:29.629547] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2112987 ] 00:21:58.017 [2024-11-20 06:32:29.695568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:58.017 [2024-11-20 06:32:29.752781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:58.276 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:58.276 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:58.276 06:32:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.NLUH5B8Prh 00:21:58.534 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:58.792 [2024-11-20 06:32:30.388855] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.792 [2024-11-20 06:32:30.394639] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:58.792 [2024-11-20 06:32:30.394672] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:58.792 [2024-11-20 06:32:30.394713] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:58.792 [2024-11-20 06:32:30.395217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e452c0 (107): Transport endpoint is not connected 00:21:58.792 [2024-11-20 06:32:30.396206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e452c0 (9): Bad file descriptor 00:21:58.792 [2024-11-20 06:32:30.397205] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:58.792 [2024-11-20 06:32:30.397227] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:58.792 [2024-11-20 06:32:30.397240] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:58.792 [2024-11-20 06:32:30.397258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 
00:21:58.792 request: 00:21:58.792 { 00:21:58.792 "name": "TLSTEST", 00:21:58.792 "trtype": "tcp", 00:21:58.792 "traddr": "10.0.0.2", 00:21:58.792 "adrfam": "ipv4", 00:21:58.792 "trsvcid": "4420", 00:21:58.792 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:58.792 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:58.792 "prchk_reftag": false, 00:21:58.792 "prchk_guard": false, 00:21:58.792 "hdgst": false, 00:21:58.792 "ddgst": false, 00:21:58.792 "psk": "key0", 00:21:58.792 "allow_unrecognized_csi": false, 00:21:58.792 "method": "bdev_nvme_attach_controller", 00:21:58.792 "req_id": 1 00:21:58.792 } 00:21:58.792 Got JSON-RPC error response 00:21:58.792 response: 00:21:58.792 { 00:21:58.792 "code": -5, 00:21:58.792 "message": "Input/output error" 00:21:58.792 } 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2112987 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2112987 ']' 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2112987 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2112987 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2112987' 00:21:58.792 killing process with pid 2112987 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2112987 00:21:58.792 Received shutdown signal, test time was about 10.000000 seconds 00:21:58.792 00:21:58.792 Latency(us) 00:21:58.792 [2024-11-20T05:32:30.628Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:58.792 [2024-11-20T05:32:30.628Z] =================================================================================================================== 00:21:58.792 [2024-11-20T05:32:30.628Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:58.792 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2112987 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:59.051 
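
The "Could not find PSK for identity" failures above are the intended outcome of these negative cases: the initiator presents a TLS PSK identity of the form "NVMe0R01 <hostnqn> <subnqn>", the target has no key registered for that host/subsystem pair, so it drops the connection and the initiator sees the ENOTCONN/Bad file descriptor flush errors before bdev_nvme_attach_controller returns -5 (Input/output error). The earlier attempt with a non-matching key file ends the same way from the initiator's side. For reference, the target-side registration that decides which pairing is accepted follows the pattern used later in this log; the key name and path below are illustrative only, and rpc.py abbreviates the full scripts/rpc.py invocation shown in the log:

rpc.py keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB        # target-side copy of the PSK
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
       nqn.2016-06.io.spdk:host1 --psk key0                 # only this hostnqn/subnqn pair gets key0
# Connecting as host2, or to cnode2, offers an identity the target cannot
# match, so the handshake never completes and the attach RPC reports -5.
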
06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2113095 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2113095 /var/tmp/bdevperf.sock 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2113095 ']' 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.051 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:21:59.051 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:59.051 [2024-11-20 06:32:30.733993] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:21:59.051 [2024-11-20 06:32:30.734105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113095 ] 00:21:59.051 [2024-11-20 06:32:30.800228] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.051 [2024-11-20 06:32:30.857682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:59.309 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:21:59.309 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:21:59.309 06:32:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:59.568 [2024-11-20 06:32:31.218076] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:59.568 [2024-11-20 06:32:31.218127] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:59.568 request: 00:21:59.568 { 00:21:59.568 "name": "key0", 00:21:59.568 "path": "", 00:21:59.568 "method": "keyring_file_add_key", 00:21:59.568 "req_id": 1 00:21:59.568 } 00:21:59.568 Got JSON-RPC error response 00:21:59.568 response: 00:21:59.568 { 00:21:59.568 "code": -1, 00:21:59.568 "message": "Operation not permitted" 00:21:59.568 } 00:21:59.568 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:59.826 [2024-11-20 06:32:31.502979] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:59.826 [2024-11-20 06:32:31.503054] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:59.826 request: 00:21:59.826 { 00:21:59.826 "name": "TLSTEST", 00:21:59.826 "trtype": "tcp", 00:21:59.826 "traddr": "10.0.0.2", 00:21:59.826 "adrfam": "ipv4", 00:21:59.826 "trsvcid": "4420", 00:21:59.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:59.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:59.826 "prchk_reftag": false, 00:21:59.826 "prchk_guard": false, 00:21:59.826 "hdgst": false, 00:21:59.826 "ddgst": false, 00:21:59.826 "psk": "key0", 00:21:59.826 "allow_unrecognized_csi": false, 00:21:59.826 "method": "bdev_nvme_attach_controller", 00:21:59.826 "req_id": 1 00:21:59.826 } 00:21:59.826 Got JSON-RPC error response 00:21:59.826 response: 00:21:59.826 { 00:21:59.826 "code": -126, 00:21:59.826 "message": "Required key not available" 00:21:59.826 } 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2113095 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2113095 ']' 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2113095 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 
2113095 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2113095' 00:21:59.826 killing process with pid 2113095 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2113095 00:21:59.826 Received shutdown signal, test time was about 10.000000 seconds 00:21:59.826 00:21:59.826 Latency(us) 00:21:59.826 [2024-11-20T05:32:31.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.826 [2024-11-20T05:32:31.662Z] =================================================================================================================== 00:21:59.826 [2024-11-20T05:32:31.662Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:59.826 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2113095 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 2109396 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2109396 ']' 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2109396 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2109396 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2109396' 00:22:00.084 killing process with pid 2109396 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2109396 00:22:00.084 06:32:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2109396 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:00.343 06:32:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.jMKaJyqxjB 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.jMKaJyqxjB 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2113317 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2113317 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2113317 ']' 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:00.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:00.343 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.343 [2024-11-20 06:32:32.125816] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:00.343 [2024-11-20 06:32:32.125919] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:00.601 [2024-11-20 06:32:32.198122] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:00.601 [2024-11-20 06:32:32.249734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:00.601 [2024-11-20 06:32:32.249794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
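
The key_long value generated above follows the NVMe/TCP PSK interchange layout: the 48 ASCII characters of the hex string are used as the key bytes, a little-endian CRC32 of those bytes is appended, and the result is base64-encoded between an "NVMeTLSkey-1:<digest>:" prefix and a trailing ":" (digest 02 is the SHA-384 variant). A standalone sketch of that transformation, mirroring the python helper the test invokes; this is an illustration, not the test's own format_interchange_psk function:

key="00112233445566778899aabbccddeeff0011223344556677"
python3 - "$key" <<'EOF'
import base64, sys, zlib

key = sys.argv[1].encode()                       # the ASCII characters of the hex string are the key bytes
crc = zlib.crc32(key).to_bytes(4, "little")      # 4-byte little-endian CRC32 appended before encoding
print("NVMeTLSkey-1:02:{}:".format(base64.b64encode(key + crc).decode()))
EOF
# Expected to print the key_long string logged above (ending in ...wWXNJw==:).
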
00:22:00.601 [2024-11-20 06:32:32.249819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:00.601 [2024-11-20 06:32:32.249845] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:00.601 [2024-11-20 06:32:32.249854] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:00.601 [2024-11-20 06:32:32.250413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.jMKaJyqxjB 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jMKaJyqxjB 00:22:00.601 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:00.860 [2024-11-20 06:32:32.634473] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:00.860 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:01.117 06:32:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:01.403 [2024-11-20 06:32:33.175924] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:01.403 [2024-11-20 06:32:33.176176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:01.403 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:01.661 malloc0 00:22:01.661 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:01.918 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:02.175 06:32:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jMKaJyqxjB 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local 
subnqn hostnqn psk 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jMKaJyqxjB 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2113602 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2113602 /var/tmp/bdevperf.sock 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2113602 ']' 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:02.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:02.433 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.690 [2024-11-20 06:32:34.309530] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:22:02.690 [2024-11-20 06:32:34.309636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2113602 ] 00:22:02.690 [2024-11-20 06:32:34.375858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:02.690 [2024-11-20 06:32:34.432991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:02.947 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:02.947 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:02.947 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:03.204 06:32:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:03.462 [2024-11-20 06:32:35.057909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:03.462 TLSTESTn1 00:22:03.462 06:32:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:03.462 Running I/O for 10 seconds... 00:22:05.765 3090.00 IOPS, 12.07 MiB/s [2024-11-20T05:32:38.534Z] 3200.00 IOPS, 12.50 MiB/s [2024-11-20T05:32:39.467Z] 3247.67 IOPS, 12.69 MiB/s [2024-11-20T05:32:40.400Z] 3255.75 IOPS, 12.72 MiB/s [2024-11-20T05:32:41.333Z] 3255.80 IOPS, 12.72 MiB/s [2024-11-20T05:32:42.705Z] 3268.83 IOPS, 12.77 MiB/s [2024-11-20T05:32:43.638Z] 3253.86 IOPS, 12.71 MiB/s [2024-11-20T05:32:44.571Z] 3266.12 IOPS, 12.76 MiB/s [2024-11-20T05:32:45.504Z] 3268.22 IOPS, 12.77 MiB/s [2024-11-20T05:32:45.504Z] 3268.70 IOPS, 12.77 MiB/s 00:22:13.668 Latency(us) 00:22:13.668 [2024-11-20T05:32:45.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.668 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:13.668 Verification LBA range: start 0x0 length 0x2000 00:22:13.668 TLSTESTn1 : 10.03 3272.43 12.78 0.00 0.00 39035.69 11650.84 56312.41 00:22:13.668 [2024-11-20T05:32:45.504Z] =================================================================================================================== 00:22:13.668 [2024-11-20T05:32:45.504Z] Total : 3272.43 12.78 0.00 0.00 39035.69 11650.84 56312.41 00:22:13.668 { 00:22:13.668 "results": [ 00:22:13.668 { 00:22:13.668 "job": "TLSTESTn1", 00:22:13.668 "core_mask": "0x4", 00:22:13.668 "workload": "verify", 00:22:13.668 "status": "finished", 00:22:13.668 "verify_range": { 00:22:13.668 "start": 0, 00:22:13.668 "length": 8192 00:22:13.668 }, 00:22:13.668 "queue_depth": 128, 00:22:13.668 "io_size": 4096, 00:22:13.668 "runtime": 10.027409, 00:22:13.668 "iops": 3272.4305949822133, 00:22:13.668 "mibps": 12.78293201164927, 00:22:13.668 "io_failed": 0, 00:22:13.668 "io_timeout": 0, 00:22:13.668 "avg_latency_us": 39035.6932102603, 00:22:13.668 "min_latency_us": 11650.844444444445, 00:22:13.668 "max_latency_us": 56312.414814814816 00:22:13.668 } 00:22:13.668 ], 00:22:13.668 
"core_count": 1 00:22:13.668 } 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2113602 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2113602 ']' 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2113602 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2113602 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2113602' 00:22:13.668 killing process with pid 2113602 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2113602 00:22:13.668 Received shutdown signal, test time was about 10.000000 seconds 00:22:13.668 00:22:13.668 Latency(us) 00:22:13.668 [2024-11-20T05:32:45.504Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:13.668 [2024-11-20T05:32:45.504Z] =================================================================================================================== 00:22:13.668 [2024-11-20T05:32:45.504Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:13.668 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2113602 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.jMKaJyqxjB 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jMKaJyqxjB 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jMKaJyqxjB 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.jMKaJyqxjB 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 
00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.jMKaJyqxjB 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2114924 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2114924 /var/tmp/bdevperf.sock 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2114924 ']' 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:13.927 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:13.927 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:13.927 [2024-11-20 06:32:45.644709] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:22:13.927 [2024-11-20 06:32:45.644808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2114924 ] 00:22:13.927 [2024-11-20 06:32:45.710627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:14.186 [2024-11-20 06:32:45.768047] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:14.186 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:14.186 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:14.186 06:32:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:14.444 [2024-11-20 06:32:46.128334] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jMKaJyqxjB': 0100666 00:22:14.444 [2024-11-20 06:32:46.128376] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:14.444 request: 00:22:14.444 { 00:22:14.444 "name": "key0", 00:22:14.444 "path": "/tmp/tmp.jMKaJyqxjB", 00:22:14.444 "method": "keyring_file_add_key", 00:22:14.444 "req_id": 1 00:22:14.444 } 00:22:14.444 Got JSON-RPC error response 00:22:14.444 response: 00:22:14.444 { 00:22:14.444 "code": -1, 00:22:14.444 "message": "Operation not permitted" 00:22:14.444 } 00:22:14.444 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:14.702 [2024-11-20 06:32:46.389145] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:14.702 [2024-11-20 06:32:46.389221] bdev_nvme.c:6622:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:22:14.702 request: 00:22:14.702 { 00:22:14.702 "name": "TLSTEST", 00:22:14.702 "trtype": "tcp", 00:22:14.702 "traddr": "10.0.0.2", 00:22:14.702 "adrfam": "ipv4", 00:22:14.702 "trsvcid": "4420", 00:22:14.702 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:14.702 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:14.702 "prchk_reftag": false, 00:22:14.702 "prchk_guard": false, 00:22:14.702 "hdgst": false, 00:22:14.702 "ddgst": false, 00:22:14.702 "psk": "key0", 00:22:14.702 "allow_unrecognized_csi": false, 00:22:14.702 "method": "bdev_nvme_attach_controller", 00:22:14.702 "req_id": 1 00:22:14.702 } 00:22:14.702 Got JSON-RPC error response 00:22:14.702 response: 00:22:14.702 { 00:22:14.702 "code": -126, 00:22:14.702 "message": "Required key not available" 00:22:14.702 } 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2114924 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2114924 ']' 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2114924 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2114924 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2114924' 00:22:14.702 killing process with pid 2114924 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2114924 00:22:14.702 Received shutdown signal, test time was about 10.000000 seconds 00:22:14.702 00:22:14.702 Latency(us) 00:22:14.702 [2024-11-20T05:32:46.538Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:14.702 [2024-11-20T05:32:46.538Z] =================================================================================================================== 00:22:14.702 [2024-11-20T05:32:46.538Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:14.702 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2114924 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 2113317 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2113317 ']' 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2113317 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2113317 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2113317' 00:22:14.962 killing process with pid 2113317 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2113317 00:22:14.962 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2113317 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # 
nvmfpid=2115077 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2115077 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2115077 ']' 00:22:15.261 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:15.262 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:15.262 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:15.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:15.262 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:15.262 06:32:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.262 [2024-11-20 06:32:47.015034] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:15.262 [2024-11-20 06:32:47.015112] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:15.546 [2024-11-20 06:32:47.091557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.546 [2024-11-20 06:32:47.151124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:15.546 [2024-11-20 06:32:47.151191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:15.546 [2024-11-20 06:32:47.151204] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:15.547 [2024-11-20 06:32:47.151215] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:15.547 [2024-11-20 06:32:47.151224] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:15.547 [2024-11-20 06:32:47.151850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.jMKaJyqxjB 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.jMKaJyqxjB 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.jMKaJyqxjB 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jMKaJyqxjB 00:22:15.547 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:15.805 [2024-11-20 06:32:47.596125] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:15.805 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:16.370 06:32:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:16.370 [2024-11-20 06:32:48.161688] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:16.370 [2024-11-20 06:32:48.161916] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:16.370 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:16.628 malloc0 00:22:16.628 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:17.194 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:17.194 [2024-11-20 
06:32:48.966329] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.jMKaJyqxjB': 0100666 00:22:17.194 [2024-11-20 06:32:48.966369] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:22:17.194 request: 00:22:17.194 { 00:22:17.194 "name": "key0", 00:22:17.194 "path": "/tmp/tmp.jMKaJyqxjB", 00:22:17.194 "method": "keyring_file_add_key", 00:22:17.194 "req_id": 1 00:22:17.194 } 00:22:17.194 Got JSON-RPC error response 00:22:17.194 response: 00:22:17.194 { 00:22:17.194 "code": -1, 00:22:17.194 "message": "Operation not permitted" 00:22:17.194 } 00:22:17.194 06:32:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:17.452 [2024-11-20 06:32:49.227043] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:22:17.452 [2024-11-20 06:32:49.227092] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:22:17.452 request: 00:22:17.452 { 00:22:17.452 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:17.452 "host": "nqn.2016-06.io.spdk:host1", 00:22:17.452 "psk": "key0", 00:22:17.452 "method": "nvmf_subsystem_add_host", 00:22:17.452 "req_id": 1 00:22:17.452 } 00:22:17.452 Got JSON-RPC error response 00:22:17.452 response: 00:22:17.452 { 00:22:17.452 "code": -32603, 00:22:17.452 "message": "Internal error" 00:22:17.452 } 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 2115077 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2115077 ']' 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2115077 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:17.452 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2115077 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2115077' 00:22:17.710 killing process with pid 2115077 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2115077 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2115077 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.jMKaJyqxjB 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:22:17.710 06:32:49 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2115384 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2115384 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2115384 ']' 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:17.710 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:17.968 [2024-11-20 06:32:49.562116] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:17.968 [2024-11-20 06:32:49.562187] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:17.968 [2024-11-20 06:32:49.632530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:17.968 [2024-11-20 06:32:49.688236] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:17.968 [2024-11-20 06:32:49.688287] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:17.968 [2024-11-20 06:32:49.688315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:17.968 [2024-11-20 06:32:49.688344] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:17.968 [2024-11-20 06:32:49.688354] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
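The keyring_file_add_key / nvmf_subsystem_add_host failures above are the expected negative case (tls.sh@178 runs setup_nvmf_tgt under NOT): the keyring refuses the PSK file while its mode is 0100666, so key0 is never registered and adding the host with --psk key0 returns "Internal error". The test then tightens the file mode (tls.sh@182) and restarts the target. A minimal sketch of the order that works, reusing only commands traced in this run (RPC and KEY are shorthands for the paths shown above):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # shorthand for this sketch
    KEY=/tmp/tmp.jMKaJyqxjB                                                # temporary PSK file from this job

    chmod 0600 "$KEY"                                    # keyring rejects the overly permissive key file
    $RPC keyring_file_add_key key0 "$KEY"                # now succeeds: key0 enters the keyring
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk key0            # host1 may now connect over TLS using key0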
00:22:17.968 [2024-11-20 06:32:49.688920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:17.968 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:17.968 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:17.968 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:17.968 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:17.968 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:18.226 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.226 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.jMKaJyqxjB 00:22:18.226 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jMKaJyqxjB 00:22:18.226 06:32:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:18.484 [2024-11-20 06:32:50.068544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.484 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:18.742 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:19.000 [2024-11-20 06:32:50.613972] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:19.000 [2024-11-20 06:32:50.614212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.000 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:19.259 malloc0 00:22:19.259 06:32:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:19.517 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:19.774 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=2115668 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 2115668 /var/tmp/bdevperf.sock 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2115668 ']' 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:20.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:20.033 06:32:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:20.033 [2024-11-20 06:32:51.754806] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:20.033 [2024-11-20 06:32:51.754896] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2115668 ] 00:22:20.033 [2024-11-20 06:32:51.823144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:20.291 [2024-11-20 06:32:51.885118] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.291 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:20.291 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:20.291 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:20.549 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:20.807 [2024-11-20 06:32:52.531189] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:20.807 TLSTESTn1 00:22:20.807 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:22:21.373 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:22:21.373 "subsystems": [ 00:22:21.373 { 00:22:21.373 "subsystem": "keyring", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "keyring_file_add_key", 00:22:21.373 "params": { 00:22:21.373 "name": "key0", 00:22:21.373 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:21.373 } 00:22:21.373 } 00:22:21.373 ] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "iobuf", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "iobuf_set_options", 00:22:21.373 "params": { 00:22:21.373 "small_pool_count": 8192, 00:22:21.373 "large_pool_count": 1024, 00:22:21.373 "small_bufsize": 8192, 00:22:21.373 "large_bufsize": 135168, 00:22:21.373 "enable_numa": false 00:22:21.373 } 00:22:21.373 } 00:22:21.373 ] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "sock", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "sock_set_default_impl", 00:22:21.373 "params": { 00:22:21.373 "impl_name": "posix" 
00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "sock_impl_set_options", 00:22:21.373 "params": { 00:22:21.373 "impl_name": "ssl", 00:22:21.373 "recv_buf_size": 4096, 00:22:21.373 "send_buf_size": 4096, 00:22:21.373 "enable_recv_pipe": true, 00:22:21.373 "enable_quickack": false, 00:22:21.373 "enable_placement_id": 0, 00:22:21.373 "enable_zerocopy_send_server": true, 00:22:21.373 "enable_zerocopy_send_client": false, 00:22:21.373 "zerocopy_threshold": 0, 00:22:21.373 "tls_version": 0, 00:22:21.373 "enable_ktls": false 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "sock_impl_set_options", 00:22:21.373 "params": { 00:22:21.373 "impl_name": "posix", 00:22:21.373 "recv_buf_size": 2097152, 00:22:21.373 "send_buf_size": 2097152, 00:22:21.373 "enable_recv_pipe": true, 00:22:21.373 "enable_quickack": false, 00:22:21.373 "enable_placement_id": 0, 00:22:21.373 "enable_zerocopy_send_server": true, 00:22:21.373 "enable_zerocopy_send_client": false, 00:22:21.373 "zerocopy_threshold": 0, 00:22:21.373 "tls_version": 0, 00:22:21.373 "enable_ktls": false 00:22:21.373 } 00:22:21.373 } 00:22:21.373 ] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "vmd", 00:22:21.373 "config": [] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "accel", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "accel_set_options", 00:22:21.373 "params": { 00:22:21.373 "small_cache_size": 128, 00:22:21.373 "large_cache_size": 16, 00:22:21.373 "task_count": 2048, 00:22:21.373 "sequence_count": 2048, 00:22:21.373 "buf_count": 2048 00:22:21.373 } 00:22:21.373 } 00:22:21.373 ] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "bdev", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "bdev_set_options", 00:22:21.373 "params": { 00:22:21.373 "bdev_io_pool_size": 65535, 00:22:21.373 "bdev_io_cache_size": 256, 00:22:21.373 "bdev_auto_examine": true, 00:22:21.373 "iobuf_small_cache_size": 128, 00:22:21.373 "iobuf_large_cache_size": 16 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "bdev_raid_set_options", 00:22:21.373 "params": { 00:22:21.373 "process_window_size_kb": 1024, 00:22:21.373 "process_max_bandwidth_mb_sec": 0 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "bdev_iscsi_set_options", 00:22:21.373 "params": { 00:22:21.373 "timeout_sec": 30 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "bdev_nvme_set_options", 00:22:21.373 "params": { 00:22:21.373 "action_on_timeout": "none", 00:22:21.373 "timeout_us": 0, 00:22:21.373 "timeout_admin_us": 0, 00:22:21.373 "keep_alive_timeout_ms": 10000, 00:22:21.373 "arbitration_burst": 0, 00:22:21.373 "low_priority_weight": 0, 00:22:21.373 "medium_priority_weight": 0, 00:22:21.373 "high_priority_weight": 0, 00:22:21.373 "nvme_adminq_poll_period_us": 10000, 00:22:21.373 "nvme_ioq_poll_period_us": 0, 00:22:21.373 "io_queue_requests": 0, 00:22:21.373 "delay_cmd_submit": true, 00:22:21.373 "transport_retry_count": 4, 00:22:21.373 "bdev_retry_count": 3, 00:22:21.373 "transport_ack_timeout": 0, 00:22:21.373 "ctrlr_loss_timeout_sec": 0, 00:22:21.373 "reconnect_delay_sec": 0, 00:22:21.373 "fast_io_fail_timeout_sec": 0, 00:22:21.373 "disable_auto_failback": false, 00:22:21.373 "generate_uuids": false, 00:22:21.373 "transport_tos": 0, 00:22:21.373 "nvme_error_stat": false, 00:22:21.373 "rdma_srq_size": 0, 00:22:21.373 "io_path_stat": false, 00:22:21.373 "allow_accel_sequence": false, 00:22:21.373 "rdma_max_cq_size": 0, 00:22:21.373 
"rdma_cm_event_timeout_ms": 0, 00:22:21.373 "dhchap_digests": [ 00:22:21.373 "sha256", 00:22:21.373 "sha384", 00:22:21.373 "sha512" 00:22:21.373 ], 00:22:21.373 "dhchap_dhgroups": [ 00:22:21.373 "null", 00:22:21.373 "ffdhe2048", 00:22:21.373 "ffdhe3072", 00:22:21.373 "ffdhe4096", 00:22:21.373 "ffdhe6144", 00:22:21.373 "ffdhe8192" 00:22:21.373 ] 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "bdev_nvme_set_hotplug", 00:22:21.373 "params": { 00:22:21.373 "period_us": 100000, 00:22:21.373 "enable": false 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "bdev_malloc_create", 00:22:21.373 "params": { 00:22:21.373 "name": "malloc0", 00:22:21.373 "num_blocks": 8192, 00:22:21.373 "block_size": 4096, 00:22:21.373 "physical_block_size": 4096, 00:22:21.373 "uuid": "51519c5e-a3d0-465b-8c62-32180d6ea0dc", 00:22:21.373 "optimal_io_boundary": 0, 00:22:21.373 "md_size": 0, 00:22:21.373 "dif_type": 0, 00:22:21.373 "dif_is_head_of_md": false, 00:22:21.373 "dif_pi_format": 0 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "bdev_wait_for_examine" 00:22:21.373 } 00:22:21.373 ] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "nbd", 00:22:21.373 "config": [] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "scheduler", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "framework_set_scheduler", 00:22:21.373 "params": { 00:22:21.373 "name": "static" 00:22:21.373 } 00:22:21.373 } 00:22:21.373 ] 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "subsystem": "nvmf", 00:22:21.373 "config": [ 00:22:21.373 { 00:22:21.373 "method": "nvmf_set_config", 00:22:21.373 "params": { 00:22:21.373 "discovery_filter": "match_any", 00:22:21.373 "admin_cmd_passthru": { 00:22:21.373 "identify_ctrlr": false 00:22:21.373 }, 00:22:21.373 "dhchap_digests": [ 00:22:21.373 "sha256", 00:22:21.373 "sha384", 00:22:21.373 "sha512" 00:22:21.373 ], 00:22:21.373 "dhchap_dhgroups": [ 00:22:21.373 "null", 00:22:21.373 "ffdhe2048", 00:22:21.373 "ffdhe3072", 00:22:21.373 "ffdhe4096", 00:22:21.373 "ffdhe6144", 00:22:21.373 "ffdhe8192" 00:22:21.373 ] 00:22:21.373 } 00:22:21.373 }, 00:22:21.373 { 00:22:21.373 "method": "nvmf_set_max_subsystems", 00:22:21.373 "params": { 00:22:21.373 "max_subsystems": 1024 00:22:21.373 } 00:22:21.373 }, 00:22:21.374 { 00:22:21.374 "method": "nvmf_set_crdt", 00:22:21.374 "params": { 00:22:21.374 "crdt1": 0, 00:22:21.374 "crdt2": 0, 00:22:21.374 "crdt3": 0 00:22:21.374 } 00:22:21.374 }, 00:22:21.374 { 00:22:21.374 "method": "nvmf_create_transport", 00:22:21.374 "params": { 00:22:21.374 "trtype": "TCP", 00:22:21.374 "max_queue_depth": 128, 00:22:21.374 "max_io_qpairs_per_ctrlr": 127, 00:22:21.374 "in_capsule_data_size": 4096, 00:22:21.374 "max_io_size": 131072, 00:22:21.374 "io_unit_size": 131072, 00:22:21.374 "max_aq_depth": 128, 00:22:21.374 "num_shared_buffers": 511, 00:22:21.374 "buf_cache_size": 4294967295, 00:22:21.374 "dif_insert_or_strip": false, 00:22:21.374 "zcopy": false, 00:22:21.374 "c2h_success": false, 00:22:21.374 "sock_priority": 0, 00:22:21.374 "abort_timeout_sec": 1, 00:22:21.374 "ack_timeout": 0, 00:22:21.374 "data_wr_pool_size": 0 00:22:21.374 } 00:22:21.374 }, 00:22:21.374 { 00:22:21.374 "method": "nvmf_create_subsystem", 00:22:21.374 "params": { 00:22:21.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.374 "allow_any_host": false, 00:22:21.374 "serial_number": "SPDK00000000000001", 00:22:21.374 "model_number": "SPDK bdev Controller", 00:22:21.374 "max_namespaces": 10, 00:22:21.374 "min_cntlid": 1, 00:22:21.374 
"max_cntlid": 65519, 00:22:21.374 "ana_reporting": false 00:22:21.374 } 00:22:21.374 }, 00:22:21.374 { 00:22:21.374 "method": "nvmf_subsystem_add_host", 00:22:21.374 "params": { 00:22:21.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.374 "host": "nqn.2016-06.io.spdk:host1", 00:22:21.374 "psk": "key0" 00:22:21.374 } 00:22:21.374 }, 00:22:21.374 { 00:22:21.374 "method": "nvmf_subsystem_add_ns", 00:22:21.374 "params": { 00:22:21.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.374 "namespace": { 00:22:21.374 "nsid": 1, 00:22:21.374 "bdev_name": "malloc0", 00:22:21.374 "nguid": "51519C5EA3D0465B8C6232180D6EA0DC", 00:22:21.374 "uuid": "51519c5e-a3d0-465b-8c62-32180d6ea0dc", 00:22:21.374 "no_auto_visible": false 00:22:21.374 } 00:22:21.374 } 00:22:21.374 }, 00:22:21.374 { 00:22:21.374 "method": "nvmf_subsystem_add_listener", 00:22:21.374 "params": { 00:22:21.374 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.374 "listen_address": { 00:22:21.374 "trtype": "TCP", 00:22:21.374 "adrfam": "IPv4", 00:22:21.374 "traddr": "10.0.0.2", 00:22:21.374 "trsvcid": "4420" 00:22:21.374 }, 00:22:21.374 "secure_channel": true 00:22:21.374 } 00:22:21.374 } 00:22:21.374 ] 00:22:21.374 } 00:22:21.374 ] 00:22:21.374 }' 00:22:21.374 06:32:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:21.632 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:22:21.632 "subsystems": [ 00:22:21.632 { 00:22:21.632 "subsystem": "keyring", 00:22:21.632 "config": [ 00:22:21.632 { 00:22:21.632 "method": "keyring_file_add_key", 00:22:21.632 "params": { 00:22:21.632 "name": "key0", 00:22:21.632 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:21.632 } 00:22:21.632 } 00:22:21.632 ] 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "subsystem": "iobuf", 00:22:21.632 "config": [ 00:22:21.632 { 00:22:21.632 "method": "iobuf_set_options", 00:22:21.632 "params": { 00:22:21.632 "small_pool_count": 8192, 00:22:21.632 "large_pool_count": 1024, 00:22:21.632 "small_bufsize": 8192, 00:22:21.632 "large_bufsize": 135168, 00:22:21.632 "enable_numa": false 00:22:21.632 } 00:22:21.632 } 00:22:21.632 ] 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "subsystem": "sock", 00:22:21.632 "config": [ 00:22:21.632 { 00:22:21.632 "method": "sock_set_default_impl", 00:22:21.632 "params": { 00:22:21.632 "impl_name": "posix" 00:22:21.632 } 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "method": "sock_impl_set_options", 00:22:21.632 "params": { 00:22:21.632 "impl_name": "ssl", 00:22:21.632 "recv_buf_size": 4096, 00:22:21.632 "send_buf_size": 4096, 00:22:21.632 "enable_recv_pipe": true, 00:22:21.632 "enable_quickack": false, 00:22:21.632 "enable_placement_id": 0, 00:22:21.632 "enable_zerocopy_send_server": true, 00:22:21.632 "enable_zerocopy_send_client": false, 00:22:21.632 "zerocopy_threshold": 0, 00:22:21.632 "tls_version": 0, 00:22:21.632 "enable_ktls": false 00:22:21.632 } 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "method": "sock_impl_set_options", 00:22:21.632 "params": { 00:22:21.632 "impl_name": "posix", 00:22:21.632 "recv_buf_size": 2097152, 00:22:21.632 "send_buf_size": 2097152, 00:22:21.632 "enable_recv_pipe": true, 00:22:21.632 "enable_quickack": false, 00:22:21.632 "enable_placement_id": 0, 00:22:21.632 "enable_zerocopy_send_server": true, 00:22:21.632 "enable_zerocopy_send_client": false, 00:22:21.632 "zerocopy_threshold": 0, 00:22:21.632 "tls_version": 0, 00:22:21.632 "enable_ktls": false 00:22:21.632 } 00:22:21.632 
} 00:22:21.632 ] 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "subsystem": "vmd", 00:22:21.632 "config": [] 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "subsystem": "accel", 00:22:21.632 "config": [ 00:22:21.632 { 00:22:21.632 "method": "accel_set_options", 00:22:21.632 "params": { 00:22:21.632 "small_cache_size": 128, 00:22:21.632 "large_cache_size": 16, 00:22:21.632 "task_count": 2048, 00:22:21.632 "sequence_count": 2048, 00:22:21.632 "buf_count": 2048 00:22:21.632 } 00:22:21.632 } 00:22:21.632 ] 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "subsystem": "bdev", 00:22:21.632 "config": [ 00:22:21.632 { 00:22:21.632 "method": "bdev_set_options", 00:22:21.632 "params": { 00:22:21.632 "bdev_io_pool_size": 65535, 00:22:21.632 "bdev_io_cache_size": 256, 00:22:21.632 "bdev_auto_examine": true, 00:22:21.632 "iobuf_small_cache_size": 128, 00:22:21.632 "iobuf_large_cache_size": 16 00:22:21.632 } 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "method": "bdev_raid_set_options", 00:22:21.632 "params": { 00:22:21.632 "process_window_size_kb": 1024, 00:22:21.632 "process_max_bandwidth_mb_sec": 0 00:22:21.632 } 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "method": "bdev_iscsi_set_options", 00:22:21.632 "params": { 00:22:21.632 "timeout_sec": 30 00:22:21.632 } 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "method": "bdev_nvme_set_options", 00:22:21.632 "params": { 00:22:21.632 "action_on_timeout": "none", 00:22:21.632 "timeout_us": 0, 00:22:21.632 "timeout_admin_us": 0, 00:22:21.632 "keep_alive_timeout_ms": 10000, 00:22:21.632 "arbitration_burst": 0, 00:22:21.632 "low_priority_weight": 0, 00:22:21.632 "medium_priority_weight": 0, 00:22:21.632 "high_priority_weight": 0, 00:22:21.632 "nvme_adminq_poll_period_us": 10000, 00:22:21.632 "nvme_ioq_poll_period_us": 0, 00:22:21.632 "io_queue_requests": 512, 00:22:21.632 "delay_cmd_submit": true, 00:22:21.632 "transport_retry_count": 4, 00:22:21.632 "bdev_retry_count": 3, 00:22:21.632 "transport_ack_timeout": 0, 00:22:21.632 "ctrlr_loss_timeout_sec": 0, 00:22:21.632 "reconnect_delay_sec": 0, 00:22:21.632 "fast_io_fail_timeout_sec": 0, 00:22:21.632 "disable_auto_failback": false, 00:22:21.632 "generate_uuids": false, 00:22:21.632 "transport_tos": 0, 00:22:21.632 "nvme_error_stat": false, 00:22:21.632 "rdma_srq_size": 0, 00:22:21.632 "io_path_stat": false, 00:22:21.632 "allow_accel_sequence": false, 00:22:21.632 "rdma_max_cq_size": 0, 00:22:21.632 "rdma_cm_event_timeout_ms": 0, 00:22:21.632 "dhchap_digests": [ 00:22:21.632 "sha256", 00:22:21.632 "sha384", 00:22:21.632 "sha512" 00:22:21.632 ], 00:22:21.632 "dhchap_dhgroups": [ 00:22:21.632 "null", 00:22:21.632 "ffdhe2048", 00:22:21.632 "ffdhe3072", 00:22:21.632 "ffdhe4096", 00:22:21.632 "ffdhe6144", 00:22:21.632 "ffdhe8192" 00:22:21.632 ] 00:22:21.632 } 00:22:21.632 }, 00:22:21.632 { 00:22:21.632 "method": "bdev_nvme_attach_controller", 00:22:21.632 "params": { 00:22:21.632 "name": "TLSTEST", 00:22:21.632 "trtype": "TCP", 00:22:21.632 "adrfam": "IPv4", 00:22:21.632 "traddr": "10.0.0.2", 00:22:21.632 "trsvcid": "4420", 00:22:21.632 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:21.632 "prchk_reftag": false, 00:22:21.632 "prchk_guard": false, 00:22:21.633 "ctrlr_loss_timeout_sec": 0, 00:22:21.633 "reconnect_delay_sec": 0, 00:22:21.633 "fast_io_fail_timeout_sec": 0, 00:22:21.633 "psk": "key0", 00:22:21.633 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:21.633 "hdgst": false, 00:22:21.633 "ddgst": false, 00:22:21.633 "multipath": "multipath" 00:22:21.633 } 00:22:21.633 }, 00:22:21.633 { 00:22:21.633 "method": 
"bdev_nvme_set_hotplug", 00:22:21.633 "params": { 00:22:21.633 "period_us": 100000, 00:22:21.633 "enable": false 00:22:21.633 } 00:22:21.633 }, 00:22:21.633 { 00:22:21.633 "method": "bdev_wait_for_examine" 00:22:21.633 } 00:22:21.633 ] 00:22:21.633 }, 00:22:21.633 { 00:22:21.633 "subsystem": "nbd", 00:22:21.633 "config": [] 00:22:21.633 } 00:22:21.633 ] 00:22:21.633 }' 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 2115668 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2115668 ']' 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2115668 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2115668 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2115668' 00:22:21.633 killing process with pid 2115668 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2115668 00:22:21.633 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.633 00:22:21.633 Latency(us) 00:22:21.633 [2024-11-20T05:32:53.469Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.633 [2024-11-20T05:32:53.469Z] =================================================================================================================== 00:22:21.633 [2024-11-20T05:32:53.469Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:22:21.633 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2115668 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 2115384 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2115384 ']' 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2115384 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2115384 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2115384' 00:22:21.891 killing process with pid 2115384 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2115384 00:22:21.891 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2115384 00:22:22.150 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:22:22.150 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:22.150 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:22.150 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:22:22.150 "subsystems": [ 00:22:22.150 { 00:22:22.150 "subsystem": "keyring", 00:22:22.150 "config": [ 00:22:22.150 { 00:22:22.150 "method": "keyring_file_add_key", 00:22:22.150 "params": { 00:22:22.150 "name": "key0", 00:22:22.150 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:22.150 } 00:22:22.150 } 00:22:22.150 ] 00:22:22.150 }, 00:22:22.150 { 00:22:22.150 "subsystem": "iobuf", 00:22:22.150 "config": [ 00:22:22.150 { 00:22:22.150 "method": "iobuf_set_options", 00:22:22.150 "params": { 00:22:22.150 "small_pool_count": 8192, 00:22:22.150 "large_pool_count": 1024, 00:22:22.150 "small_bufsize": 8192, 00:22:22.150 "large_bufsize": 135168, 00:22:22.150 "enable_numa": false 00:22:22.150 } 00:22:22.150 } 00:22:22.150 ] 00:22:22.150 }, 00:22:22.150 { 00:22:22.150 "subsystem": "sock", 00:22:22.150 "config": [ 00:22:22.150 { 00:22:22.150 "method": "sock_set_default_impl", 00:22:22.150 "params": { 00:22:22.150 "impl_name": "posix" 00:22:22.150 } 00:22:22.150 }, 00:22:22.150 { 00:22:22.150 "method": "sock_impl_set_options", 00:22:22.150 "params": { 00:22:22.150 "impl_name": "ssl", 00:22:22.150 "recv_buf_size": 4096, 00:22:22.150 "send_buf_size": 4096, 00:22:22.150 "enable_recv_pipe": true, 00:22:22.150 "enable_quickack": false, 00:22:22.150 "enable_placement_id": 0, 00:22:22.150 "enable_zerocopy_send_server": true, 00:22:22.150 "enable_zerocopy_send_client": false, 00:22:22.150 "zerocopy_threshold": 0, 00:22:22.150 "tls_version": 0, 00:22:22.150 "enable_ktls": false 00:22:22.150 } 00:22:22.150 }, 00:22:22.150 { 00:22:22.150 "method": "sock_impl_set_options", 00:22:22.150 "params": { 00:22:22.150 "impl_name": "posix", 00:22:22.151 "recv_buf_size": 2097152, 00:22:22.151 "send_buf_size": 2097152, 00:22:22.151 "enable_recv_pipe": true, 00:22:22.151 "enable_quickack": false, 00:22:22.151 "enable_placement_id": 0, 00:22:22.151 "enable_zerocopy_send_server": true, 00:22:22.151 "enable_zerocopy_send_client": false, 00:22:22.151 "zerocopy_threshold": 0, 00:22:22.151 "tls_version": 0, 00:22:22.151 "enable_ktls": false 00:22:22.151 } 00:22:22.151 } 00:22:22.151 ] 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "subsystem": "vmd", 00:22:22.151 "config": [] 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "subsystem": "accel", 00:22:22.151 "config": [ 00:22:22.151 { 00:22:22.151 "method": "accel_set_options", 00:22:22.151 "params": { 00:22:22.151 "small_cache_size": 128, 00:22:22.151 "large_cache_size": 16, 00:22:22.151 "task_count": 2048, 00:22:22.151 "sequence_count": 2048, 00:22:22.151 "buf_count": 2048 00:22:22.151 } 00:22:22.151 } 00:22:22.151 ] 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "subsystem": "bdev", 00:22:22.151 "config": [ 00:22:22.151 { 00:22:22.151 "method": "bdev_set_options", 00:22:22.151 "params": { 00:22:22.151 "bdev_io_pool_size": 65535, 00:22:22.151 "bdev_io_cache_size": 256, 00:22:22.151 "bdev_auto_examine": true, 00:22:22.151 "iobuf_small_cache_size": 128, 00:22:22.151 "iobuf_large_cache_size": 16 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "bdev_raid_set_options", 00:22:22.151 "params": { 00:22:22.151 "process_window_size_kb": 1024, 00:22:22.151 "process_max_bandwidth_mb_sec": 0 00:22:22.151 } 00:22:22.151 }, 
00:22:22.151 { 00:22:22.151 "method": "bdev_iscsi_set_options", 00:22:22.151 "params": { 00:22:22.151 "timeout_sec": 30 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "bdev_nvme_set_options", 00:22:22.151 "params": { 00:22:22.151 "action_on_timeout": "none", 00:22:22.151 "timeout_us": 0, 00:22:22.151 "timeout_admin_us": 0, 00:22:22.151 "keep_alive_timeout_ms": 10000, 00:22:22.151 "arbitration_burst": 0, 00:22:22.151 "low_priority_weight": 0, 00:22:22.151 "medium_priority_weight": 0, 00:22:22.151 "high_priority_weight": 0, 00:22:22.151 "nvme_adminq_poll_period_us": 10000, 00:22:22.151 "nvme_ioq_poll_period_us": 0, 00:22:22.151 "io_queue_requests": 0, 00:22:22.151 "delay_cmd_submit": true, 00:22:22.151 "transport_retry_count": 4, 00:22:22.151 "bdev_retry_count": 3, 00:22:22.151 "transport_ack_timeout": 0, 00:22:22.151 "ctrlr_loss_timeout_sec": 0, 00:22:22.151 "reconnect_delay_sec": 0, 00:22:22.151 "fast_io_fail_timeout_sec": 0, 00:22:22.151 "disable_auto_failback": false, 00:22:22.151 "generate_uuids": false, 00:22:22.151 "transport_tos": 0, 00:22:22.151 "nvme_error_stat": false, 00:22:22.151 "rdma_srq_size": 0, 00:22:22.151 "io_path_stat": false, 00:22:22.151 "allow_accel_sequence": false, 00:22:22.151 "rdma_max_cq_size": 0, 00:22:22.151 "rdma_cm_event_timeout_ms": 0, 00:22:22.151 "dhchap_digests": [ 00:22:22.151 "sha256", 00:22:22.151 "sha384", 00:22:22.151 "sha512" 00:22:22.151 ], 00:22:22.151 "dhchap_dhgroups": [ 00:22:22.151 "null", 00:22:22.151 "ffdhe2048", 00:22:22.151 "ffdhe3072", 00:22:22.151 "ffdhe4096", 00:22:22.151 "ffdhe6144", 00:22:22.151 "ffdhe8192" 00:22:22.151 ] 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "bdev_nvme_set_hotplug", 00:22:22.151 "params": { 00:22:22.151 "period_us": 100000, 00:22:22.151 "enable": false 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "bdev_malloc_create", 00:22:22.151 "params": { 00:22:22.151 "name": "malloc0", 00:22:22.151 "num_blocks": 8192, 00:22:22.151 "block_size": 4096, 00:22:22.151 "physical_block_size": 4096, 00:22:22.151 "uuid": "51519c5e-a3d0-465b-8c62-32180d6ea0dc", 00:22:22.151 "optimal_io_boundary": 0, 00:22:22.151 "md_size": 0, 00:22:22.151 "dif_type": 0, 00:22:22.151 "dif_is_head_of_md": false, 00:22:22.151 "dif_pi_format": 0 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "bdev_wait_for_examine" 00:22:22.151 } 00:22:22.151 ] 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "subsystem": "nbd", 00:22:22.151 "config": [] 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "subsystem": "scheduler", 00:22:22.151 "config": [ 00:22:22.151 { 00:22:22.151 "method": "framework_set_scheduler", 00:22:22.151 "params": { 00:22:22.151 "name": "static" 00:22:22.151 } 00:22:22.151 } 00:22:22.151 ] 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "subsystem": "nvmf", 00:22:22.151 "config": [ 00:22:22.151 { 00:22:22.151 "method": "nvmf_set_config", 00:22:22.151 "params": { 00:22:22.151 "discovery_filter": "match_any", 00:22:22.151 "admin_cmd_passthru": { 00:22:22.151 "identify_ctrlr": false 00:22:22.151 }, 00:22:22.151 "dhchap_digests": [ 00:22:22.151 "sha256", 00:22:22.151 "sha384", 00:22:22.151 "sha512" 00:22:22.151 ], 00:22:22.151 "dhchap_dhgroups": [ 00:22:22.151 "null", 00:22:22.151 "ffdhe2048", 00:22:22.151 "ffdhe3072", 00:22:22.151 "ffdhe4096", 00:22:22.151 "ffdhe6144", 00:22:22.151 "ffdhe8192" 00:22:22.151 ] 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "nvmf_set_max_subsystems", 00:22:22.151 "params": { 00:22:22.151 "max_subsystems": 1024 
00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "nvmf_set_crdt", 00:22:22.151 "params": { 00:22:22.151 "crdt1": 0, 00:22:22.151 "crdt2": 0, 00:22:22.151 "crdt3": 0 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "nvmf_create_transport", 00:22:22.151 "params": { 00:22:22.151 "trtype": "TCP", 00:22:22.151 "max_queue_depth": 128, 00:22:22.151 "max_io_qpairs_per_ctrlr": 127, 00:22:22.151 "in_capsule_data_size": 4096, 00:22:22.151 "max_io_size": 131072, 00:22:22.151 "io_unit_size": 131072, 00:22:22.151 "max_aq_depth": 128, 00:22:22.151 "num_shared_buffers": 511, 00:22:22.151 "buf_cache_size": 4294967295, 00:22:22.151 "dif_insert_or_strip": false, 00:22:22.151 "zcopy": false, 00:22:22.151 "c2h_success": false, 00:22:22.151 "sock_priority": 0, 00:22:22.151 "abort_timeout_sec": 1, 00:22:22.151 "ack_timeout": 0, 00:22:22.151 "data_wr_pool_size": 0 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "nvmf_create_subsystem", 00:22:22.151 "params": { 00:22:22.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.151 "allow_any_host": false, 00:22:22.151 "serial_number": "SPDK00000000000001", 00:22:22.151 "model_number": "SPDK bdev Controller", 00:22:22.151 "max_namespaces": 10, 00:22:22.151 "min_cntlid": 1, 00:22:22.151 "max_cntlid": 65519, 00:22:22.151 "ana_reporting": false 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "nvmf_subsystem_add_host", 00:22:22.151 "params": { 00:22:22.151 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.151 "host": "nqn.2016-06.io.spdk:host1", 00:22:22.151 "psk": "key0" 00:22:22.151 } 00:22:22.151 }, 00:22:22.151 { 00:22:22.151 "method": "nvmf_subsystem_add_ns", 00:22:22.152 "params": { 00:22:22.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.152 "namespace": { 00:22:22.152 "nsid": 1, 00:22:22.152 "bdev_name": "malloc0", 00:22:22.152 "nguid": "51519C5EA3D0465B8C6232180D6EA0DC", 00:22:22.152 "uuid": "51519c5e-a3d0-465b-8c62-32180d6ea0dc", 00:22:22.152 "no_auto_visible": false 00:22:22.152 } 00:22:22.152 } 00:22:22.152 }, 00:22:22.152 { 00:22:22.152 "method": "nvmf_subsystem_add_listener", 00:22:22.152 "params": { 00:22:22.152 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:22.152 "listen_address": { 00:22:22.152 "trtype": "TCP", 00:22:22.152 "adrfam": "IPv4", 00:22:22.152 "traddr": "10.0.0.2", 00:22:22.152 "trsvcid": "4420" 00:22:22.152 }, 00:22:22.152 "secure_channel": true 00:22:22.152 } 00:22:22.152 } 00:22:22.152 ] 00:22:22.152 } 00:22:22.152 ] 00:22:22.152 }' 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2115945 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2115945 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2115945 ']' 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:22:22.152 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:22.152 06:32:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:22.152 [2024-11-20 06:32:53.885619] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:22.152 [2024-11-20 06:32:53.885738] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:22.152 [2024-11-20 06:32:53.955661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:22.410 [2024-11-20 06:32:54.011882] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:22.410 [2024-11-20 06:32:54.011931] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:22.410 [2024-11-20 06:32:54.011953] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:22.410 [2024-11-20 06:32:54.011963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:22.410 [2024-11-20 06:32:54.011972] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:22.410 [2024-11-20 06:32:54.012624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:22.668 [2024-11-20 06:32:54.263824] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:22.668 [2024-11-20 06:32:54.295847] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:22.668 [2024-11-20 06:32:54.296094] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=2116103 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 2116103 /var/tmp/bdevperf.sock 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2116103 ']' 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:22:23.234 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:23.234 06:32:54 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:22:23.234 "subsystems": [ 00:22:23.234 { 00:22:23.234 "subsystem": "keyring", 00:22:23.234 "config": [ 00:22:23.234 { 00:22:23.234 "method": "keyring_file_add_key", 00:22:23.234 "params": { 00:22:23.234 "name": "key0", 00:22:23.234 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:23.234 } 00:22:23.234 } 00:22:23.234 ] 00:22:23.234 }, 00:22:23.234 { 00:22:23.234 "subsystem": "iobuf", 00:22:23.234 "config": [ 00:22:23.234 { 00:22:23.234 "method": "iobuf_set_options", 00:22:23.234 "params": { 00:22:23.234 "small_pool_count": 8192, 00:22:23.234 "large_pool_count": 1024, 00:22:23.234 "small_bufsize": 8192, 00:22:23.234 "large_bufsize": 135168, 00:22:23.234 "enable_numa": false 00:22:23.234 } 00:22:23.234 } 00:22:23.234 ] 00:22:23.234 }, 00:22:23.234 { 00:22:23.234 "subsystem": "sock", 00:22:23.234 "config": [ 00:22:23.234 { 00:22:23.234 "method": "sock_set_default_impl", 00:22:23.234 "params": { 00:22:23.234 "impl_name": "posix" 00:22:23.234 } 00:22:23.234 }, 00:22:23.234 { 00:22:23.234 "method": "sock_impl_set_options", 00:22:23.234 "params": { 00:22:23.234 "impl_name": "ssl", 00:22:23.234 "recv_buf_size": 4096, 00:22:23.234 "send_buf_size": 4096, 00:22:23.234 "enable_recv_pipe": true, 00:22:23.234 "enable_quickack": false, 00:22:23.234 "enable_placement_id": 0, 00:22:23.234 "enable_zerocopy_send_server": true, 00:22:23.234 "enable_zerocopy_send_client": false, 00:22:23.234 "zerocopy_threshold": 0, 00:22:23.234 "tls_version": 0, 00:22:23.234 "enable_ktls": false 00:22:23.234 } 00:22:23.234 }, 00:22:23.234 { 00:22:23.234 "method": "sock_impl_set_options", 00:22:23.234 "params": { 00:22:23.234 "impl_name": "posix", 00:22:23.234 "recv_buf_size": 2097152, 00:22:23.234 "send_buf_size": 2097152, 00:22:23.234 "enable_recv_pipe": true, 00:22:23.234 "enable_quickack": false, 00:22:23.234 "enable_placement_id": 0, 00:22:23.234 "enable_zerocopy_send_server": true, 00:22:23.234 "enable_zerocopy_send_client": false, 00:22:23.234 "zerocopy_threshold": 0, 00:22:23.234 "tls_version": 0, 00:22:23.234 "enable_ktls": false 00:22:23.234 } 00:22:23.234 } 00:22:23.234 ] 00:22:23.234 }, 00:22:23.234 { 00:22:23.234 "subsystem": "vmd", 00:22:23.234 "config": [] 00:22:23.234 }, 00:22:23.234 { 00:22:23.234 "subsystem": "accel", 00:22:23.234 "config": [ 00:22:23.234 { 00:22:23.234 "method": "accel_set_options", 00:22:23.234 "params": { 00:22:23.234 "small_cache_size": 128, 00:22:23.234 "large_cache_size": 16, 00:22:23.234 "task_count": 2048, 00:22:23.234 "sequence_count": 2048, 00:22:23.234 "buf_count": 2048 00:22:23.234 } 00:22:23.234 } 00:22:23.235 ] 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "subsystem": "bdev", 00:22:23.235 "config": [ 00:22:23.235 { 00:22:23.235 "method": "bdev_set_options", 00:22:23.235 "params": { 00:22:23.235 "bdev_io_pool_size": 65535, 00:22:23.235 "bdev_io_cache_size": 256, 00:22:23.235 "bdev_auto_examine": true, 00:22:23.235 "iobuf_small_cache_size": 128, 00:22:23.235 "iobuf_large_cache_size": 16 00:22:23.235 } 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "method": "bdev_raid_set_options", 00:22:23.235 "params": { 00:22:23.235 "process_window_size_kb": 1024, 00:22:23.235 "process_max_bandwidth_mb_sec": 0 00:22:23.235 } 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "method": "bdev_iscsi_set_options", 00:22:23.235 "params": { 00:22:23.235 "timeout_sec": 30 00:22:23.235 } 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "method": "bdev_nvme_set_options", 00:22:23.235 "params": { 00:22:23.235 "action_on_timeout": "none", 00:22:23.235 
"timeout_us": 0, 00:22:23.235 "timeout_admin_us": 0, 00:22:23.235 "keep_alive_timeout_ms": 10000, 00:22:23.235 "arbitration_burst": 0, 00:22:23.235 "low_priority_weight": 0, 00:22:23.235 "medium_priority_weight": 0, 00:22:23.235 "high_priority_weight": 0, 00:22:23.235 "nvme_adminq_poll_period_us": 10000, 00:22:23.235 "nvme_ioq_poll_period_us": 0, 00:22:23.235 "io_queue_requests": 512, 00:22:23.235 "delay_cmd_submit": true, 00:22:23.235 "transport_retry_count": 4, 00:22:23.235 "bdev_retry_count": 3, 00:22:23.235 "transport_ack_timeout": 0, 00:22:23.235 "ctrlr_loss_timeout_sec": 0, 00:22:23.235 "reconnect_delay_sec": 0, 00:22:23.235 "fast_io_fail_timeout_sec": 0, 00:22:23.235 "disable_auto_failback": false, 00:22:23.235 "generate_uuids": false, 00:22:23.235 "transport_tos": 0, 00:22:23.235 "nvme_error_stat": false, 00:22:23.235 "rdma_srq_size": 0, 00:22:23.235 "io_path_stat": false, 00:22:23.235 "allow_accel_sequence": false, 00:22:23.235 "rdma_max_cq_size": 0, 00:22:23.235 "rdma_cm_event_timeout_ms": 0, 00:22:23.235 "dhchap_digests": [ 00:22:23.235 "sha256", 00:22:23.235 "sha384", 00:22:23.235 "sha512" 00:22:23.235 ], 00:22:23.235 "dhchap_dhgroups": [ 00:22:23.235 "null", 00:22:23.235 "ffdhe2048", 00:22:23.235 "ffdhe3072", 00:22:23.235 "ffdhe4096", 00:22:23.235 "ffdhe6144", 00:22:23.235 "ffdhe8192" 00:22:23.235 ] 00:22:23.235 } 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "method": "bdev_nvme_attach_controller", 00:22:23.235 "params": { 00:22:23.235 "name": "TLSTEST", 00:22:23.235 "trtype": "TCP", 00:22:23.235 "adrfam": "IPv4", 00:22:23.235 "traddr": "10.0.0.2", 00:22:23.235 "trsvcid": "4420", 00:22:23.235 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:23.235 "prchk_reftag": false, 00:22:23.235 "prchk_guard": false, 00:22:23.235 "ctrlr_loss_timeout_sec": 0, 00:22:23.235 "reconnect_delay_sec": 0, 00:22:23.235 "fast_io_fail_timeout_sec": 0, 00:22:23.235 "psk": "key0", 00:22:23.235 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:23.235 "hdgst": false, 00:22:23.235 "ddgst": false, 00:22:23.235 "multipath": "multipath" 00:22:23.235 } 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "method": "bdev_nvme_set_hotplug", 00:22:23.235 "params": { 00:22:23.235 "period_us": 100000, 00:22:23.235 "enable": false 00:22:23.235 } 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "method": "bdev_wait_for_examine" 00:22:23.235 } 00:22:23.235 ] 00:22:23.235 }, 00:22:23.235 { 00:22:23.235 "subsystem": "nbd", 00:22:23.235 "config": [] 00:22:23.235 } 00:22:23.235 ] 00:22:23.235 }' 00:22:23.235 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:23.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:23.235 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:23.235 06:32:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:23.235 [2024-11-20 06:32:55.007373] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:22:23.235 [2024-11-20 06:32:55.007468] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2116103 ] 00:22:23.494 [2024-11-20 06:32:55.074045] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:23.494 [2024-11-20 06:32:55.131499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:23.494 [2024-11-20 06:32:55.315363] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:23.752 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:23.752 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:23.752 06:32:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:23.752 Running I/O for 10 seconds... 00:22:26.056 3146.00 IOPS, 12.29 MiB/s [2024-11-20T05:32:58.825Z] 3192.50 IOPS, 12.47 MiB/s [2024-11-20T05:32:59.757Z] 3210.00 IOPS, 12.54 MiB/s [2024-11-20T05:33:00.691Z] 3225.00 IOPS, 12.60 MiB/s [2024-11-20T05:33:01.624Z] 3229.40 IOPS, 12.61 MiB/s [2024-11-20T05:33:03.003Z] 3212.00 IOPS, 12.55 MiB/s [2024-11-20T05:33:03.942Z] 3194.00 IOPS, 12.48 MiB/s [2024-11-20T05:33:04.875Z] 3202.88 IOPS, 12.51 MiB/s [2024-11-20T05:33:05.808Z] 3214.44 IOPS, 12.56 MiB/s [2024-11-20T05:33:05.808Z] 3212.90 IOPS, 12.55 MiB/s 00:22:33.972 Latency(us) 00:22:33.972 [2024-11-20T05:33:05.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.972 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:33.972 Verification LBA range: start 0x0 length 0x2000 00:22:33.972 TLSTESTn1 : 10.02 3218.39 12.57 0.00 0.00 39706.10 8641.04 44273.21 00:22:33.972 [2024-11-20T05:33:05.808Z] =================================================================================================================== 00:22:33.972 [2024-11-20T05:33:05.808Z] Total : 3218.39 12.57 0.00 0.00 39706.10 8641.04 44273.21 00:22:33.972 { 00:22:33.972 "results": [ 00:22:33.972 { 00:22:33.972 "job": "TLSTESTn1", 00:22:33.972 "core_mask": "0x4", 00:22:33.972 "workload": "verify", 00:22:33.972 "status": "finished", 00:22:33.972 "verify_range": { 00:22:33.972 "start": 0, 00:22:33.972 "length": 8192 00:22:33.972 }, 00:22:33.972 "queue_depth": 128, 00:22:33.972 "io_size": 4096, 00:22:33.972 "runtime": 10.022413, 00:22:33.972 "iops": 3218.3866300460777, 00:22:33.972 "mibps": 12.571822773617491, 00:22:33.972 "io_failed": 0, 00:22:33.972 "io_timeout": 0, 00:22:33.972 "avg_latency_us": 39706.10342151675, 00:22:33.972 "min_latency_us": 8641.042962962963, 00:22:33.972 "max_latency_us": 44273.20888888889 00:22:33.972 } 00:22:33.972 ], 00:22:33.972 "core_count": 1 00:22:33.972 } 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 2116103 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2116103 ']' 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2116103 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@957 -- # uname 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:33.972 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2116103 00:22:33.973 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:22:33.973 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:22:33.973 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2116103' 00:22:33.973 killing process with pid 2116103 00:22:33.973 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2116103 00:22:33.973 Received shutdown signal, test time was about 10.000000 seconds 00:22:33.973 00:22:33.973 Latency(us) 00:22:33.973 [2024-11-20T05:33:05.809Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.973 [2024-11-20T05:33:05.809Z] =================================================================================================================== 00:22:33.973 [2024-11-20T05:33:05.809Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:33.973 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2116103 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 2115945 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2115945 ']' 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2115945 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2115945 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2115945' 00:22:34.230 killing process with pid 2115945 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2115945 00:22:34.230 06:33:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2115945 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2117540 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2117540 
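The bdevperf summary above is internally consistent: at a 4096-byte I/O size, 3218.39 IOPS works out to about 12.57 MiB/s, and with a queue depth of 128 Little's law predicts roughly 128 / 3218.39 ≈ 39.8 ms average latency, close to the reported 39.7 ms. A quick re-derivation from the result JSON (values copied from the block above; this is only a sanity check, not part of the test):

    awk 'BEGIN {
        iops = 3218.3866300460777; io_size = 4096; qd = 128
        printf "%.2f MiB/s\n",          iops * io_size / 1048576   # ~12.57, matches "mibps"
        printf "%.0f us avg latency\n", qd / iops * 1e6            # ~39772, close to "avg_latency_us"
    }'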
00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2117540 ']' 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:34.489 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.489 [2024-11-20 06:33:06.228144] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:34.489 [2024-11-20 06:33:06.228231] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.489 [2024-11-20 06:33:06.297950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.748 [2024-11-20 06:33:06.355903] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.748 [2024-11-20 06:33:06.355952] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:34.748 [2024-11-20 06:33:06.355965] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.748 [2024-11-20 06:33:06.355976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.748 [2024-11-20 06:33:06.355986] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
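As a quick sanity check on the bdevperf results JSON captured a little earlier in this section (the 10 s TLSTESTn1 run), the reported "mibps" can be recomputed from "iops" and "io_size". This is a sketch only, not part of the harness: it assumes the JSON body (with the per-line timestamps stripped) was saved to a hypothetical results.json and that jq is installed.

    # iops * io_size / 2^20 should reproduce the reported MiB/s:
    #   3218.3866 * 4096 / 1048576 ~= 12.57, matching "mibps": 12.5718...
    jq '.results[0] | .iops * .io_size / 1048576' results.json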
00:22:34.748 [2024-11-20 06:33:06.356561] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.jMKaJyqxjB 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.jMKaJyqxjB 00:22:34.748 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:35.006 [2024-11-20 06:33:06.835695] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:35.265 06:33:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:35.523 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:35.782 [2024-11-20 06:33:07.385187] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:35.782 [2024-11-20 06:33:07.385474] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:35.782 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:36.040 malloc0 00:22:36.040 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:36.299 06:33:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:36.557 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=2118190 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 2118190 /var/tmp/bdevperf.sock 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@833 -- # '[' -z 2118190 ']' 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:36.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:36.816 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:36.816 [2024-11-20 06:33:08.544252] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:36.816 [2024-11-20 06:33:08.544341] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118190 ] 00:22:36.816 [2024-11-20 06:33:08.609070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:37.074 [2024-11-20 06:33:08.667082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:37.074 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:37.074 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:37.074 06:33:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:37.332 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:37.596 [2024-11-20 06:33:09.330494] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:37.596 nvme0n1 00:22:37.596 06:33:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:37.853 Running I/O for 1 seconds... 
00:22:38.785 3490.00 IOPS, 13.63 MiB/s 00:22:38.785 Latency(us) 00:22:38.785 [2024-11-20T05:33:10.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.785 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:38.785 Verification LBA range: start 0x0 length 0x2000 00:22:38.785 nvme0n1 : 1.02 3553.72 13.88 0.00 0.00 35682.66 6505.05 46991.74 00:22:38.785 [2024-11-20T05:33:10.621Z] =================================================================================================================== 00:22:38.785 [2024-11-20T05:33:10.621Z] Total : 3553.72 13.88 0.00 0.00 35682.66 6505.05 46991.74 00:22:38.785 { 00:22:38.785 "results": [ 00:22:38.785 { 00:22:38.785 "job": "nvme0n1", 00:22:38.785 "core_mask": "0x2", 00:22:38.785 "workload": "verify", 00:22:38.785 "status": "finished", 00:22:38.785 "verify_range": { 00:22:38.785 "start": 0, 00:22:38.785 "length": 8192 00:22:38.785 }, 00:22:38.785 "queue_depth": 128, 00:22:38.785 "io_size": 4096, 00:22:38.785 "runtime": 1.018088, 00:22:38.785 "iops": 3553.7203070854384, 00:22:38.785 "mibps": 13.881719949552494, 00:22:38.785 "io_failed": 0, 00:22:38.785 "io_timeout": 0, 00:22:38.785 "avg_latency_us": 35682.663384313004, 00:22:38.785 "min_latency_us": 6505.054814814815, 00:22:38.785 "max_latency_us": 46991.73925925926 00:22:38.785 } 00:22:38.785 ], 00:22:38.785 "core_count": 1 00:22:38.785 } 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 2118190 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2118190 ']' 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2118190 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2118190 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2118190' 00:22:38.785 killing process with pid 2118190 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2118190 00:22:38.785 Received shutdown signal, test time was about 1.000000 seconds 00:22:38.785 00:22:38.785 Latency(us) 00:22:38.785 [2024-11-20T05:33:10.621Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:38.785 [2024-11-20T05:33:10.621Z] =================================================================================================================== 00:22:38.785 [2024-11-20T05:33:10.621Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:38.785 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2118190 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 2117540 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2117540 ']' 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2117540 00:22:39.043 06:33:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2117540 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2117540' 00:22:39.043 killing process with pid 2117540 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2117540 00:22:39.043 06:33:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2117540 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2118619 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2118619 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2118619 ']' 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:39.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.301 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.559 [2024-11-20 06:33:11.168840] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:39.559 [2024-11-20 06:33:11.168940] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:39.559 [2024-11-20 06:33:11.239782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.559 [2024-11-20 06:33:11.295660] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:39.559 [2024-11-20 06:33:11.295715] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:39.559 [2024-11-20 06:33:11.295737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:39.559 [2024-11-20 06:33:11.295748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:39.559 [2024-11-20 06:33:11.295758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:39.559 [2024-11-20 06:33:11.296319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.817 [2024-11-20 06:33:11.446239] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.817 malloc0 00:22:39.817 [2024-11-20 06:33:11.478643] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:39.817 [2024-11-20 06:33:11.478887] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=2118644 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 2118644 /var/tmp/bdevperf.sock 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2118644 ']' 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:39.817 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:39.817 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:39.817 [2024-11-20 06:33:11.555528] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:22:39.817 [2024-11-20 06:33:11.555624] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2118644 ] 00:22:39.817 [2024-11-20 06:33:11.624841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:40.075 [2024-11-20 06:33:11.686018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.075 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:40.075 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:40.075 06:33:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB 00:22:40.333 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:22:40.898 [2024-11-20 06:33:12.433698] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:40.898 nvme0n1 00:22:40.898 06:33:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:40.898 Running I/O for 1 seconds... 00:22:41.831 3385.00 IOPS, 13.22 MiB/s 00:22:41.831 Latency(us) 00:22:41.831 [2024-11-20T05:33:13.667Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:41.831 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:41.831 Verification LBA range: start 0x0 length 0x2000 00:22:41.831 nvme0n1 : 1.02 3448.51 13.47 0.00 0.00 36789.33 6359.42 35340.89 00:22:41.831 [2024-11-20T05:33:13.667Z] =================================================================================================================== 00:22:41.831 [2024-11-20T05:33:13.667Z] Total : 3448.51 13.47 0.00 0.00 36789.33 6359.42 35340.89 00:22:41.831 { 00:22:41.831 "results": [ 00:22:41.831 { 00:22:41.831 "job": "nvme0n1", 00:22:41.831 "core_mask": "0x2", 00:22:41.831 "workload": "verify", 00:22:41.831 "status": "finished", 00:22:41.831 "verify_range": { 00:22:41.831 "start": 0, 00:22:41.831 "length": 8192 00:22:41.831 }, 00:22:41.831 "queue_depth": 128, 00:22:41.831 "io_size": 4096, 00:22:41.831 "runtime": 1.018701, 00:22:41.831 "iops": 3448.5094252386125, 00:22:41.831 "mibps": 13.47073994233833, 00:22:41.831 "io_failed": 0, 00:22:41.831 "io_timeout": 0, 00:22:41.831 "avg_latency_us": 36789.3322402505, 00:22:41.831 "min_latency_us": 6359.419259259259, 00:22:41.831 "max_latency_us": 35340.89481481481 00:22:41.831 } 00:22:41.831 ], 00:22:41.831 "core_count": 1 00:22:41.831 } 00:22:42.089 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:22:42.089 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.089 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.089 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.089 06:33:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:22:42.089 "subsystems": [ 00:22:42.089 { 00:22:42.089 "subsystem": "keyring", 00:22:42.089 "config": [ 00:22:42.089 { 00:22:42.089 "method": "keyring_file_add_key", 00:22:42.089 "params": { 00:22:42.089 "name": "key0", 00:22:42.089 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:42.089 } 00:22:42.089 } 00:22:42.089 ] 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "subsystem": "iobuf", 00:22:42.089 "config": [ 00:22:42.089 { 00:22:42.089 "method": "iobuf_set_options", 00:22:42.089 "params": { 00:22:42.089 "small_pool_count": 8192, 00:22:42.089 "large_pool_count": 1024, 00:22:42.089 "small_bufsize": 8192, 00:22:42.089 "large_bufsize": 135168, 00:22:42.089 "enable_numa": false 00:22:42.089 } 00:22:42.089 } 00:22:42.089 ] 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "subsystem": "sock", 00:22:42.089 "config": [ 00:22:42.089 { 00:22:42.089 "method": "sock_set_default_impl", 00:22:42.089 "params": { 00:22:42.089 "impl_name": "posix" 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "sock_impl_set_options", 00:22:42.089 "params": { 00:22:42.089 "impl_name": "ssl", 00:22:42.089 "recv_buf_size": 4096, 00:22:42.089 "send_buf_size": 4096, 00:22:42.089 "enable_recv_pipe": true, 00:22:42.089 "enable_quickack": false, 00:22:42.089 "enable_placement_id": 0, 00:22:42.089 "enable_zerocopy_send_server": true, 00:22:42.089 "enable_zerocopy_send_client": false, 00:22:42.089 "zerocopy_threshold": 0, 00:22:42.089 "tls_version": 0, 00:22:42.089 "enable_ktls": false 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "sock_impl_set_options", 00:22:42.089 "params": { 00:22:42.089 "impl_name": "posix", 00:22:42.089 "recv_buf_size": 2097152, 00:22:42.089 "send_buf_size": 2097152, 00:22:42.089 "enable_recv_pipe": true, 00:22:42.089 "enable_quickack": false, 00:22:42.089 "enable_placement_id": 0, 00:22:42.089 "enable_zerocopy_send_server": true, 00:22:42.089 "enable_zerocopy_send_client": false, 00:22:42.089 "zerocopy_threshold": 0, 00:22:42.089 "tls_version": 0, 00:22:42.089 "enable_ktls": false 00:22:42.089 } 00:22:42.089 } 00:22:42.089 ] 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "subsystem": "vmd", 00:22:42.089 "config": [] 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "subsystem": "accel", 00:22:42.089 "config": [ 00:22:42.089 { 00:22:42.089 "method": "accel_set_options", 00:22:42.089 "params": { 00:22:42.089 "small_cache_size": 128, 00:22:42.089 "large_cache_size": 16, 00:22:42.089 "task_count": 2048, 00:22:42.089 "sequence_count": 2048, 00:22:42.089 "buf_count": 2048 00:22:42.089 } 00:22:42.089 } 00:22:42.089 ] 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "subsystem": "bdev", 00:22:42.089 "config": [ 00:22:42.089 { 00:22:42.089 "method": "bdev_set_options", 00:22:42.089 "params": { 00:22:42.089 "bdev_io_pool_size": 65535, 00:22:42.089 "bdev_io_cache_size": 256, 00:22:42.089 "bdev_auto_examine": true, 00:22:42.089 "iobuf_small_cache_size": 128, 00:22:42.089 "iobuf_large_cache_size": 16 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "bdev_raid_set_options", 00:22:42.089 "params": { 00:22:42.089 "process_window_size_kb": 1024, 00:22:42.089 "process_max_bandwidth_mb_sec": 0 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "bdev_iscsi_set_options", 00:22:42.089 "params": { 00:22:42.089 "timeout_sec": 30 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "bdev_nvme_set_options", 00:22:42.089 "params": { 00:22:42.089 "action_on_timeout": "none", 00:22:42.089 
"timeout_us": 0, 00:22:42.089 "timeout_admin_us": 0, 00:22:42.089 "keep_alive_timeout_ms": 10000, 00:22:42.089 "arbitration_burst": 0, 00:22:42.089 "low_priority_weight": 0, 00:22:42.089 "medium_priority_weight": 0, 00:22:42.089 "high_priority_weight": 0, 00:22:42.089 "nvme_adminq_poll_period_us": 10000, 00:22:42.089 "nvme_ioq_poll_period_us": 0, 00:22:42.089 "io_queue_requests": 0, 00:22:42.089 "delay_cmd_submit": true, 00:22:42.089 "transport_retry_count": 4, 00:22:42.089 "bdev_retry_count": 3, 00:22:42.089 "transport_ack_timeout": 0, 00:22:42.089 "ctrlr_loss_timeout_sec": 0, 00:22:42.089 "reconnect_delay_sec": 0, 00:22:42.089 "fast_io_fail_timeout_sec": 0, 00:22:42.089 "disable_auto_failback": false, 00:22:42.089 "generate_uuids": false, 00:22:42.089 "transport_tos": 0, 00:22:42.089 "nvme_error_stat": false, 00:22:42.089 "rdma_srq_size": 0, 00:22:42.089 "io_path_stat": false, 00:22:42.089 "allow_accel_sequence": false, 00:22:42.089 "rdma_max_cq_size": 0, 00:22:42.089 "rdma_cm_event_timeout_ms": 0, 00:22:42.089 "dhchap_digests": [ 00:22:42.089 "sha256", 00:22:42.089 "sha384", 00:22:42.089 "sha512" 00:22:42.089 ], 00:22:42.089 "dhchap_dhgroups": [ 00:22:42.089 "null", 00:22:42.089 "ffdhe2048", 00:22:42.089 "ffdhe3072", 00:22:42.089 "ffdhe4096", 00:22:42.089 "ffdhe6144", 00:22:42.089 "ffdhe8192" 00:22:42.089 ] 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "bdev_nvme_set_hotplug", 00:22:42.089 "params": { 00:22:42.089 "period_us": 100000, 00:22:42.089 "enable": false 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "bdev_malloc_create", 00:22:42.089 "params": { 00:22:42.089 "name": "malloc0", 00:22:42.089 "num_blocks": 8192, 00:22:42.089 "block_size": 4096, 00:22:42.089 "physical_block_size": 4096, 00:22:42.089 "uuid": "d08d2d35-c220-4d8b-a5c8-3975ef1fb1f1", 00:22:42.089 "optimal_io_boundary": 0, 00:22:42.089 "md_size": 0, 00:22:42.089 "dif_type": 0, 00:22:42.089 "dif_is_head_of_md": false, 00:22:42.089 "dif_pi_format": 0 00:22:42.089 } 00:22:42.089 }, 00:22:42.089 { 00:22:42.089 "method": "bdev_wait_for_examine" 00:22:42.089 } 00:22:42.089 ] 00:22:42.089 }, 00:22:42.090 { 00:22:42.090 "subsystem": "nbd", 00:22:42.090 "config": [] 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "subsystem": "scheduler", 00:22:42.090 "config": [ 00:22:42.090 { 00:22:42.090 "method": "framework_set_scheduler", 00:22:42.090 "params": { 00:22:42.090 "name": "static" 00:22:42.090 } 00:22:42.090 } 00:22:42.090 ] 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "subsystem": "nvmf", 00:22:42.090 "config": [ 00:22:42.090 { 00:22:42.090 "method": "nvmf_set_config", 00:22:42.090 "params": { 00:22:42.090 "discovery_filter": "match_any", 00:22:42.090 "admin_cmd_passthru": { 00:22:42.090 "identify_ctrlr": false 00:22:42.090 }, 00:22:42.090 "dhchap_digests": [ 00:22:42.090 "sha256", 00:22:42.090 "sha384", 00:22:42.090 "sha512" 00:22:42.090 ], 00:22:42.090 "dhchap_dhgroups": [ 00:22:42.090 "null", 00:22:42.090 "ffdhe2048", 00:22:42.090 "ffdhe3072", 00:22:42.090 "ffdhe4096", 00:22:42.090 "ffdhe6144", 00:22:42.090 "ffdhe8192" 00:22:42.090 ] 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_set_max_subsystems", 00:22:42.090 "params": { 00:22:42.090 "max_subsystems": 1024 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_set_crdt", 00:22:42.090 "params": { 00:22:42.090 "crdt1": 0, 00:22:42.090 "crdt2": 0, 00:22:42.090 "crdt3": 0 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_create_transport", 00:22:42.090 "params": 
{ 00:22:42.090 "trtype": "TCP", 00:22:42.090 "max_queue_depth": 128, 00:22:42.090 "max_io_qpairs_per_ctrlr": 127, 00:22:42.090 "in_capsule_data_size": 4096, 00:22:42.090 "max_io_size": 131072, 00:22:42.090 "io_unit_size": 131072, 00:22:42.090 "max_aq_depth": 128, 00:22:42.090 "num_shared_buffers": 511, 00:22:42.090 "buf_cache_size": 4294967295, 00:22:42.090 "dif_insert_or_strip": false, 00:22:42.090 "zcopy": false, 00:22:42.090 "c2h_success": false, 00:22:42.090 "sock_priority": 0, 00:22:42.090 "abort_timeout_sec": 1, 00:22:42.090 "ack_timeout": 0, 00:22:42.090 "data_wr_pool_size": 0 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_create_subsystem", 00:22:42.090 "params": { 00:22:42.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.090 "allow_any_host": false, 00:22:42.090 "serial_number": "00000000000000000000", 00:22:42.090 "model_number": "SPDK bdev Controller", 00:22:42.090 "max_namespaces": 32, 00:22:42.090 "min_cntlid": 1, 00:22:42.090 "max_cntlid": 65519, 00:22:42.090 "ana_reporting": false 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_subsystem_add_host", 00:22:42.090 "params": { 00:22:42.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.090 "host": "nqn.2016-06.io.spdk:host1", 00:22:42.090 "psk": "key0" 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_subsystem_add_ns", 00:22:42.090 "params": { 00:22:42.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.090 "namespace": { 00:22:42.090 "nsid": 1, 00:22:42.090 "bdev_name": "malloc0", 00:22:42.090 "nguid": "D08D2D35C2204D8BA5C83975EF1FB1F1", 00:22:42.090 "uuid": "d08d2d35-c220-4d8b-a5c8-3975ef1fb1f1", 00:22:42.090 "no_auto_visible": false 00:22:42.090 } 00:22:42.090 } 00:22:42.090 }, 00:22:42.090 { 00:22:42.090 "method": "nvmf_subsystem_add_listener", 00:22:42.090 "params": { 00:22:42.090 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.090 "listen_address": { 00:22:42.090 "trtype": "TCP", 00:22:42.090 "adrfam": "IPv4", 00:22:42.090 "traddr": "10.0.0.2", 00:22:42.090 "trsvcid": "4420" 00:22:42.090 }, 00:22:42.090 "secure_channel": false, 00:22:42.090 "sock_impl": "ssl" 00:22:42.090 } 00:22:42.090 } 00:22:42.090 ] 00:22:42.090 } 00:22:42.090 ] 00:22:42.090 }' 00:22:42.090 06:33:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:22:42.348 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:22:42.348 "subsystems": [ 00:22:42.348 { 00:22:42.348 "subsystem": "keyring", 00:22:42.348 "config": [ 00:22:42.348 { 00:22:42.348 "method": "keyring_file_add_key", 00:22:42.348 "params": { 00:22:42.348 "name": "key0", 00:22:42.348 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:42.348 } 00:22:42.348 } 00:22:42.348 ] 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "subsystem": "iobuf", 00:22:42.348 "config": [ 00:22:42.348 { 00:22:42.348 "method": "iobuf_set_options", 00:22:42.348 "params": { 00:22:42.348 "small_pool_count": 8192, 00:22:42.348 "large_pool_count": 1024, 00:22:42.348 "small_bufsize": 8192, 00:22:42.348 "large_bufsize": 135168, 00:22:42.348 "enable_numa": false 00:22:42.348 } 00:22:42.348 } 00:22:42.348 ] 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "subsystem": "sock", 00:22:42.348 "config": [ 00:22:42.348 { 00:22:42.348 "method": "sock_set_default_impl", 00:22:42.348 "params": { 00:22:42.348 "impl_name": "posix" 00:22:42.348 } 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "method": "sock_impl_set_options", 00:22:42.348 
"params": { 00:22:42.348 "impl_name": "ssl", 00:22:42.348 "recv_buf_size": 4096, 00:22:42.348 "send_buf_size": 4096, 00:22:42.348 "enable_recv_pipe": true, 00:22:42.348 "enable_quickack": false, 00:22:42.348 "enable_placement_id": 0, 00:22:42.348 "enable_zerocopy_send_server": true, 00:22:42.348 "enable_zerocopy_send_client": false, 00:22:42.348 "zerocopy_threshold": 0, 00:22:42.348 "tls_version": 0, 00:22:42.348 "enable_ktls": false 00:22:42.348 } 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "method": "sock_impl_set_options", 00:22:42.348 "params": { 00:22:42.348 "impl_name": "posix", 00:22:42.348 "recv_buf_size": 2097152, 00:22:42.348 "send_buf_size": 2097152, 00:22:42.348 "enable_recv_pipe": true, 00:22:42.348 "enable_quickack": false, 00:22:42.348 "enable_placement_id": 0, 00:22:42.348 "enable_zerocopy_send_server": true, 00:22:42.348 "enable_zerocopy_send_client": false, 00:22:42.348 "zerocopy_threshold": 0, 00:22:42.348 "tls_version": 0, 00:22:42.348 "enable_ktls": false 00:22:42.348 } 00:22:42.348 } 00:22:42.348 ] 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "subsystem": "vmd", 00:22:42.348 "config": [] 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "subsystem": "accel", 00:22:42.348 "config": [ 00:22:42.348 { 00:22:42.348 "method": "accel_set_options", 00:22:42.348 "params": { 00:22:42.348 "small_cache_size": 128, 00:22:42.348 "large_cache_size": 16, 00:22:42.348 "task_count": 2048, 00:22:42.348 "sequence_count": 2048, 00:22:42.348 "buf_count": 2048 00:22:42.348 } 00:22:42.348 } 00:22:42.348 ] 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "subsystem": "bdev", 00:22:42.348 "config": [ 00:22:42.348 { 00:22:42.348 "method": "bdev_set_options", 00:22:42.348 "params": { 00:22:42.348 "bdev_io_pool_size": 65535, 00:22:42.348 "bdev_io_cache_size": 256, 00:22:42.348 "bdev_auto_examine": true, 00:22:42.348 "iobuf_small_cache_size": 128, 00:22:42.348 "iobuf_large_cache_size": 16 00:22:42.348 } 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "method": "bdev_raid_set_options", 00:22:42.348 "params": { 00:22:42.348 "process_window_size_kb": 1024, 00:22:42.348 "process_max_bandwidth_mb_sec": 0 00:22:42.348 } 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "method": "bdev_iscsi_set_options", 00:22:42.348 "params": { 00:22:42.348 "timeout_sec": 30 00:22:42.348 } 00:22:42.348 }, 00:22:42.348 { 00:22:42.348 "method": "bdev_nvme_set_options", 00:22:42.348 "params": { 00:22:42.348 "action_on_timeout": "none", 00:22:42.348 "timeout_us": 0, 00:22:42.348 "timeout_admin_us": 0, 00:22:42.348 "keep_alive_timeout_ms": 10000, 00:22:42.348 "arbitration_burst": 0, 00:22:42.348 "low_priority_weight": 0, 00:22:42.348 "medium_priority_weight": 0, 00:22:42.348 "high_priority_weight": 0, 00:22:42.348 "nvme_adminq_poll_period_us": 10000, 00:22:42.348 "nvme_ioq_poll_period_us": 0, 00:22:42.348 "io_queue_requests": 512, 00:22:42.348 "delay_cmd_submit": true, 00:22:42.348 "transport_retry_count": 4, 00:22:42.348 "bdev_retry_count": 3, 00:22:42.348 "transport_ack_timeout": 0, 00:22:42.348 "ctrlr_loss_timeout_sec": 0, 00:22:42.348 "reconnect_delay_sec": 0, 00:22:42.348 "fast_io_fail_timeout_sec": 0, 00:22:42.348 "disable_auto_failback": false, 00:22:42.348 "generate_uuids": false, 00:22:42.348 "transport_tos": 0, 00:22:42.349 "nvme_error_stat": false, 00:22:42.349 "rdma_srq_size": 0, 00:22:42.349 "io_path_stat": false, 00:22:42.349 "allow_accel_sequence": false, 00:22:42.349 "rdma_max_cq_size": 0, 00:22:42.349 "rdma_cm_event_timeout_ms": 0, 00:22:42.349 "dhchap_digests": [ 00:22:42.349 "sha256", 00:22:42.349 "sha384", 00:22:42.349 
"sha512" 00:22:42.349 ], 00:22:42.349 "dhchap_dhgroups": [ 00:22:42.349 "null", 00:22:42.349 "ffdhe2048", 00:22:42.349 "ffdhe3072", 00:22:42.349 "ffdhe4096", 00:22:42.349 "ffdhe6144", 00:22:42.349 "ffdhe8192" 00:22:42.349 ] 00:22:42.349 } 00:22:42.349 }, 00:22:42.349 { 00:22:42.349 "method": "bdev_nvme_attach_controller", 00:22:42.349 "params": { 00:22:42.349 "name": "nvme0", 00:22:42.349 "trtype": "TCP", 00:22:42.349 "adrfam": "IPv4", 00:22:42.349 "traddr": "10.0.0.2", 00:22:42.349 "trsvcid": "4420", 00:22:42.349 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.349 "prchk_reftag": false, 00:22:42.349 "prchk_guard": false, 00:22:42.349 "ctrlr_loss_timeout_sec": 0, 00:22:42.349 "reconnect_delay_sec": 0, 00:22:42.349 "fast_io_fail_timeout_sec": 0, 00:22:42.349 "psk": "key0", 00:22:42.349 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:42.349 "hdgst": false, 00:22:42.349 "ddgst": false, 00:22:42.349 "multipath": "multipath" 00:22:42.349 } 00:22:42.349 }, 00:22:42.349 { 00:22:42.349 "method": "bdev_nvme_set_hotplug", 00:22:42.349 "params": { 00:22:42.349 "period_us": 100000, 00:22:42.349 "enable": false 00:22:42.349 } 00:22:42.349 }, 00:22:42.349 { 00:22:42.349 "method": "bdev_enable_histogram", 00:22:42.349 "params": { 00:22:42.349 "name": "nvme0n1", 00:22:42.349 "enable": true 00:22:42.349 } 00:22:42.349 }, 00:22:42.349 { 00:22:42.349 "method": "bdev_wait_for_examine" 00:22:42.349 } 00:22:42.349 ] 00:22:42.349 }, 00:22:42.349 { 00:22:42.349 "subsystem": "nbd", 00:22:42.349 "config": [] 00:22:42.349 } 00:22:42.349 ] 00:22:42.349 }' 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 2118644 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2118644 ']' 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2118644 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2118644 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2118644' 00:22:42.349 killing process with pid 2118644 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2118644 00:22:42.349 Received shutdown signal, test time was about 1.000000 seconds 00:22:42.349 00:22:42.349 Latency(us) 00:22:42.349 [2024-11-20T05:33:14.185Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:42.349 [2024-11-20T05:33:14.185Z] =================================================================================================================== 00:22:42.349 [2024-11-20T05:33:14.185Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:42.349 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2118644 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 2118619 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2118619 
']' 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2118619 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2118619 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2118619' 00:22:42.607 killing process with pid 2118619 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2118619 00:22:42.607 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2118619 00:22:42.865 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:22:42.865 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:42.865 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:22:42.865 "subsystems": [ 00:22:42.865 { 00:22:42.865 "subsystem": "keyring", 00:22:42.865 "config": [ 00:22:42.865 { 00:22:42.865 "method": "keyring_file_add_key", 00:22:42.865 "params": { 00:22:42.865 "name": "key0", 00:22:42.865 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:42.865 } 00:22:42.865 } 00:22:42.865 ] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": "iobuf", 00:22:42.865 "config": [ 00:22:42.865 { 00:22:42.865 "method": "iobuf_set_options", 00:22:42.865 "params": { 00:22:42.865 "small_pool_count": 8192, 00:22:42.865 "large_pool_count": 1024, 00:22:42.865 "small_bufsize": 8192, 00:22:42.865 "large_bufsize": 135168, 00:22:42.865 "enable_numa": false 00:22:42.865 } 00:22:42.865 } 00:22:42.865 ] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": "sock", 00:22:42.865 "config": [ 00:22:42.865 { 00:22:42.865 "method": "sock_set_default_impl", 00:22:42.865 "params": { 00:22:42.865 "impl_name": "posix" 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "sock_impl_set_options", 00:22:42.865 "params": { 00:22:42.865 "impl_name": "ssl", 00:22:42.865 "recv_buf_size": 4096, 00:22:42.865 "send_buf_size": 4096, 00:22:42.865 "enable_recv_pipe": true, 00:22:42.865 "enable_quickack": false, 00:22:42.865 "enable_placement_id": 0, 00:22:42.865 "enable_zerocopy_send_server": true, 00:22:42.865 "enable_zerocopy_send_client": false, 00:22:42.865 "zerocopy_threshold": 0, 00:22:42.865 "tls_version": 0, 00:22:42.865 "enable_ktls": false 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "sock_impl_set_options", 00:22:42.865 "params": { 00:22:42.865 "impl_name": "posix", 00:22:42.865 "recv_buf_size": 2097152, 00:22:42.865 "send_buf_size": 2097152, 00:22:42.865 "enable_recv_pipe": true, 00:22:42.865 "enable_quickack": false, 00:22:42.865 "enable_placement_id": 0, 00:22:42.865 "enable_zerocopy_send_server": true, 00:22:42.865 "enable_zerocopy_send_client": false, 00:22:42.865 "zerocopy_threshold": 0, 00:22:42.865 "tls_version": 0, 00:22:42.865 "enable_ktls": false 00:22:42.865 } 00:22:42.865 } 00:22:42.865 ] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": 
"vmd", 00:22:42.865 "config": [] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": "accel", 00:22:42.865 "config": [ 00:22:42.865 { 00:22:42.865 "method": "accel_set_options", 00:22:42.865 "params": { 00:22:42.865 "small_cache_size": 128, 00:22:42.865 "large_cache_size": 16, 00:22:42.865 "task_count": 2048, 00:22:42.865 "sequence_count": 2048, 00:22:42.865 "buf_count": 2048 00:22:42.865 } 00:22:42.865 } 00:22:42.865 ] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": "bdev", 00:22:42.865 "config": [ 00:22:42.865 { 00:22:42.865 "method": "bdev_set_options", 00:22:42.865 "params": { 00:22:42.865 "bdev_io_pool_size": 65535, 00:22:42.865 "bdev_io_cache_size": 256, 00:22:42.865 "bdev_auto_examine": true, 00:22:42.865 "iobuf_small_cache_size": 128, 00:22:42.865 "iobuf_large_cache_size": 16 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "bdev_raid_set_options", 00:22:42.865 "params": { 00:22:42.865 "process_window_size_kb": 1024, 00:22:42.865 "process_max_bandwidth_mb_sec": 0 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "bdev_iscsi_set_options", 00:22:42.865 "params": { 00:22:42.865 "timeout_sec": 30 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "bdev_nvme_set_options", 00:22:42.865 "params": { 00:22:42.865 "action_on_timeout": "none", 00:22:42.865 "timeout_us": 0, 00:22:42.865 "timeout_admin_us": 0, 00:22:42.865 "keep_alive_timeout_ms": 10000, 00:22:42.865 "arbitration_burst": 0, 00:22:42.865 "low_priority_weight": 0, 00:22:42.865 "medium_priority_weight": 0, 00:22:42.865 "high_priority_weight": 0, 00:22:42.865 "nvme_adminq_poll_period_us": 10000, 00:22:42.865 "nvme_ioq_poll_period_us": 0, 00:22:42.865 "io_queue_requests": 0, 00:22:42.865 "delay_cmd_submit": true, 00:22:42.865 "transport_retry_count": 4, 00:22:42.865 "bdev_retry_count": 3, 00:22:42.865 "transport_ack_timeout": 0, 00:22:42.865 "ctrlr_loss_timeout_sec": 0, 00:22:42.865 "reconnect_delay_sec": 0, 00:22:42.865 "fast_io_fail_timeout_sec": 0, 00:22:42.865 "disable_auto_failback": false, 00:22:42.865 "generate_uuids": false, 00:22:42.865 "transport_tos": 0, 00:22:42.865 "nvme_error_stat": false, 00:22:42.865 "rdma_srq_size": 0, 00:22:42.865 "io_path_stat": false, 00:22:42.865 "allow_accel_sequence": false, 00:22:42.865 "rdma_max_cq_size": 0, 00:22:42.865 "rdma_cm_event_timeout_ms": 0, 00:22:42.865 "dhchap_digests": [ 00:22:42.865 "sha256", 00:22:42.865 "sha384", 00:22:42.865 "sha512" 00:22:42.865 ], 00:22:42.865 "dhchap_dhgroups": [ 00:22:42.865 "null", 00:22:42.865 "ffdhe2048", 00:22:42.865 "ffdhe3072", 00:22:42.865 "ffdhe4096", 00:22:42.865 "ffdhe6144", 00:22:42.865 "ffdhe8192" 00:22:42.865 ] 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "bdev_nvme_set_hotplug", 00:22:42.865 "params": { 00:22:42.865 "period_us": 100000, 00:22:42.865 "enable": false 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "bdev_malloc_create", 00:22:42.865 "params": { 00:22:42.865 "name": "malloc0", 00:22:42.865 "num_blocks": 8192, 00:22:42.865 "block_size": 4096, 00:22:42.865 "physical_block_size": 4096, 00:22:42.865 "uuid": "d08d2d35-c220-4d8b-a5c8-3975ef1fb1f1", 00:22:42.865 "optimal_io_boundary": 0, 00:22:42.865 "md_size": 0, 00:22:42.865 "dif_type": 0, 00:22:42.865 "dif_is_head_of_md": false, 00:22:42.865 "dif_pi_format": 0 00:22:42.865 } 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "method": "bdev_wait_for_examine" 00:22:42.865 } 00:22:42.865 ] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": "nbd", 00:22:42.865 "config": 
[] 00:22:42.865 }, 00:22:42.865 { 00:22:42.865 "subsystem": "scheduler", 00:22:42.865 "config": [ 00:22:42.865 { 00:22:42.865 "method": "framework_set_scheduler", 00:22:42.865 "params": { 00:22:42.865 "name": "static" 00:22:42.865 } 00:22:42.865 } 00:22:42.865 ] 00:22:42.865 }, 00:22:42.866 { 00:22:42.866 "subsystem": "nvmf", 00:22:42.866 "config": [ 00:22:42.866 { 00:22:42.866 "method": "nvmf_set_config", 00:22:42.866 "params": { 00:22:42.866 "discovery_filter": "match_any", 00:22:42.866 "admin_cmd_passthru": { 00:22:42.866 "identify_ctrlr": false 00:22:42.866 }, 00:22:42.866 "dhchap_digests": [ 00:22:42.866 "sha256", 00:22:42.866 "sha384", 00:22:42.866 "sha512" 00:22:42.866 ], 00:22:42.866 "dhchap_dhgroups": [ 00:22:42.866 "null", 00:22:42.866 "ffdhe2048", 00:22:42.866 "ffdhe3072", 00:22:42.866 "ffdhe4096", 00:22:42.866 "ffdhe6144", 00:22:42.866 "ffdhe8192" 00:22:42.866 ] 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_set_max_subsystems", 00:22:42.866 "params": { 00:22:42.866 "max_subsystems": 1024 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_set_crdt", 00:22:42.866 "params": { 00:22:42.866 "crdt1": 0, 00:22:42.866 "crdt2": 0, 00:22:42.866 "crdt3": 0 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_create_transport", 00:22:42.866 "params": { 00:22:42.866 "trtype": "TCP", 00:22:42.866 "max_queue_depth": 128, 00:22:42.866 "max_io_qpairs_per_ctrlr": 127, 00:22:42.866 "in_capsule_data_size": 4096, 00:22:42.866 "max_io_size": 131072, 00:22:42.866 "io_unit_size": 131072, 00:22:42.866 "max_aq_depth": 128, 00:22:42.866 "num_shared_buffers": 511, 00:22:42.866 "buf_cache_size": 4294967295, 00:22:42.866 "dif_insert_or_strip": false, 00:22:42.866 "zcopy": false, 00:22:42.866 "c2h_success": false, 00:22:42.866 "sock_priority": 0, 00:22:42.866 "abort_timeout_sec": 1, 00:22:42.866 "ack_timeout": 0, 00:22:42.866 "data_wr_pool_size": 0 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_create_subsystem", 00:22:42.866 "params": { 00:22:42.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.866 "allow_any_host": false, 00:22:42.866 "serial_number": "00000000000000000000", 00:22:42.866 "model_number": "SPDK bdev Controller", 00:22:42.866 "max_namespaces": 32, 00:22:42.866 "min_cntlid": 1, 00:22:42.866 "max_cntlid": 65519, 00:22:42.866 "ana_reporting": false 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_subsystem_add_host", 00:22:42.866 "params": { 00:22:42.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.866 "host": "nqn.2016-06.io.spdk:host1", 00:22:42.866 "psk": "key0" 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_subsystem_add_ns", 00:22:42.866 "params": { 00:22:42.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.866 "namespace": { 00:22:42.866 "nsid": 1, 00:22:42.866 "bdev_name": "malloc0", 00:22:42.866 "nguid": "D08D2D35C2204D8BA5C83975EF1FB1F1", 00:22:42.866 "uuid": "d08d2d35-c220-4d8b-a5c8-3975ef1fb1f1", 00:22:42.866 "no_auto_visible": false 00:22:42.866 } 00:22:42.866 } 00:22:42.866 }, 00:22:42.866 { 00:22:42.866 "method": "nvmf_subsystem_add_listener", 00:22:42.866 "params": { 00:22:42.866 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:42.866 "listen_address": { 00:22:42.866 "trtype": "TCP", 00:22:42.866 "adrfam": "IPv4", 00:22:42.866 "traddr": "10.0.0.2", 00:22:42.866 "trsvcid": "4420" 00:22:42.866 }, 00:22:42.866 "secure_channel": false, 00:22:42.866 "sock_impl": "ssl" 00:22:42.866 } 00:22:42.866 } 00:22:42.866 ] 00:22:42.866 } 
00:22:42.866 ] 00:22:42.866 }' 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=2119051 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 2119051 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2119051 ']' 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:42.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:42.866 06:33:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:42.866 [2024-11-20 06:33:14.668503] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:42.866 [2024-11-20 06:33:14.668599] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:43.124 [2024-11-20 06:33:14.740577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:43.124 [2024-11-20 06:33:14.798967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:43.124 [2024-11-20 06:33:14.799025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:43.124 [2024-11-20 06:33:14.799038] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:43.124 [2024-11-20 06:33:14.799049] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:43.124 [2024-11-20 06:33:14.799059] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
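The target configuration echoed just above (keyring entry key0, the TLS-capable listener on 10.0.0.2:4420 with sock_impl "ssl", and host nqn.2016-06.io.spdk:host1 bound to psk key0) corresponds to the rpc.py sequence this run issued earlier (target/tls.sh@52-59 on the target side, @259-260 on the bdevperf side). Condensed below as a sketch: the key path and NQNs are the ones generated by this job, and $RPC is shorthand introduced here, not a variable from the harness.

    RPC=./scripts/rpc.py
    # target side
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    $RPC bdev_malloc_create 32 4096 -b malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    $RPC keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0
    # initiator side, against the bdevperf RPC socket
    $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.jMKaJyqxjB
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1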
00:22:43.124 [2024-11-20 06:33:14.799718] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:43.382 [2024-11-20 06:33:15.046996] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:43.382 [2024-11-20 06:33:15.079036] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:43.382 [2024-11-20 06:33:15.079270] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=2119203 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 2119203 /var/tmp/bdevperf.sock 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # '[' -z 2119203 ']' 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:43.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
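In this last leg both daemons read their configuration from a file descriptor rather than a file: nvmf_tgt with -c /dev/fd/62 and bdevperf with -c /dev/fd/63, paired with the echoed JSON configs. A minimal sketch of passing an inline config that way, assuming bash process substitution; the <(...) form and the shortened paths are inferences, not something recorded in this log.

    CFG='{"subsystems":[]}'   # placeholder; the real JSON is the config echoed in this log
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -c <(echo "$CFG")
    ./build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c <(echo "$CFG")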
00:22:43.949 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:22:43.949 "subsystems": [ 00:22:43.949 { 00:22:43.949 "subsystem": "keyring", 00:22:43.949 "config": [ 00:22:43.949 { 00:22:43.949 "method": "keyring_file_add_key", 00:22:43.949 "params": { 00:22:43.949 "name": "key0", 00:22:43.949 "path": "/tmp/tmp.jMKaJyqxjB" 00:22:43.949 } 00:22:43.949 } 00:22:43.949 ] 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "subsystem": "iobuf", 00:22:43.949 "config": [ 00:22:43.949 { 00:22:43.949 "method": "iobuf_set_options", 00:22:43.949 "params": { 00:22:43.949 "small_pool_count": 8192, 00:22:43.949 "large_pool_count": 1024, 00:22:43.949 "small_bufsize": 8192, 00:22:43.949 "large_bufsize": 135168, 00:22:43.949 "enable_numa": false 00:22:43.949 } 00:22:43.949 } 00:22:43.949 ] 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "subsystem": "sock", 00:22:43.949 "config": [ 00:22:43.949 { 00:22:43.949 "method": "sock_set_default_impl", 00:22:43.949 "params": { 00:22:43.949 "impl_name": "posix" 00:22:43.949 } 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "method": "sock_impl_set_options", 00:22:43.949 "params": { 00:22:43.949 "impl_name": "ssl", 00:22:43.949 "recv_buf_size": 4096, 00:22:43.949 "send_buf_size": 4096, 00:22:43.949 "enable_recv_pipe": true, 00:22:43.949 "enable_quickack": false, 00:22:43.949 "enable_placement_id": 0, 00:22:43.949 "enable_zerocopy_send_server": true, 00:22:43.949 "enable_zerocopy_send_client": false, 00:22:43.949 "zerocopy_threshold": 0, 00:22:43.949 "tls_version": 0, 00:22:43.949 "enable_ktls": false 00:22:43.949 } 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "method": "sock_impl_set_options", 00:22:43.949 "params": { 00:22:43.949 "impl_name": "posix", 00:22:43.949 "recv_buf_size": 2097152, 00:22:43.949 "send_buf_size": 2097152, 00:22:43.949 "enable_recv_pipe": true, 00:22:43.949 "enable_quickack": false, 00:22:43.949 "enable_placement_id": 0, 00:22:43.949 "enable_zerocopy_send_server": true, 00:22:43.949 "enable_zerocopy_send_client": false, 00:22:43.949 "zerocopy_threshold": 0, 00:22:43.949 "tls_version": 0, 00:22:43.949 "enable_ktls": false 00:22:43.949 } 00:22:43.949 } 00:22:43.949 ] 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "subsystem": "vmd", 00:22:43.949 "config": [] 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "subsystem": "accel", 00:22:43.949 "config": [ 00:22:43.949 { 00:22:43.949 "method": "accel_set_options", 00:22:43.949 "params": { 00:22:43.949 "small_cache_size": 128, 00:22:43.949 "large_cache_size": 16, 00:22:43.949 "task_count": 2048, 00:22:43.949 "sequence_count": 2048, 00:22:43.949 "buf_count": 2048 00:22:43.949 } 00:22:43.949 } 00:22:43.949 ] 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "subsystem": "bdev", 00:22:43.949 "config": [ 00:22:43.949 { 00:22:43.949 "method": "bdev_set_options", 00:22:43.949 "params": { 00:22:43.949 "bdev_io_pool_size": 65535, 00:22:43.949 "bdev_io_cache_size": 256, 00:22:43.949 "bdev_auto_examine": true, 00:22:43.949 "iobuf_small_cache_size": 128, 00:22:43.949 "iobuf_large_cache_size": 16 00:22:43.949 } 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "method": "bdev_raid_set_options", 00:22:43.949 "params": { 00:22:43.949 "process_window_size_kb": 1024, 00:22:43.949 "process_max_bandwidth_mb_sec": 0 00:22:43.949 } 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "method": "bdev_iscsi_set_options", 00:22:43.949 "params": { 00:22:43.949 "timeout_sec": 30 00:22:43.949 } 00:22:43.949 }, 00:22:43.949 { 00:22:43.949 "method": "bdev_nvme_set_options", 00:22:43.950 "params": { 00:22:43.950 "action_on_timeout": "none", 
00:22:43.950 "timeout_us": 0, 00:22:43.950 "timeout_admin_us": 0, 00:22:43.950 "keep_alive_timeout_ms": 10000, 00:22:43.950 "arbitration_burst": 0, 00:22:43.950 "low_priority_weight": 0, 00:22:43.950 "medium_priority_weight": 0, 00:22:43.950 "high_priority_weight": 0, 00:22:43.950 "nvme_adminq_poll_period_us": 10000, 00:22:43.950 "nvme_ioq_poll_period_us": 0, 00:22:43.950 "io_queue_requests": 512, 00:22:43.950 "delay_cmd_submit": true, 00:22:43.950 "transport_retry_count": 4, 00:22:43.950 "bdev_retry_count": 3, 00:22:43.950 "transport_ack_timeout": 0, 00:22:43.950 "ctrlr_loss_timeout_sec": 0, 00:22:43.950 "reconnect_delay_sec": 0, 00:22:43.950 "fast_io_fail_timeout_sec": 0, 00:22:43.950 "disable_auto_failback": false, 00:22:43.950 "generate_uuids": false, 00:22:43.950 "transport_tos": 0, 00:22:43.950 "nvme_error_stat": false, 00:22:43.950 "rdma_srq_size": 0, 00:22:43.950 "io_path_stat": false, 00:22:43.950 "allow_accel_sequence": false, 00:22:43.950 "rdma_max_cq_size": 0, 00:22:43.950 "rdma_cm_event_timeout_ms": 0, 00:22:43.950 "dhchap_digests": [ 00:22:43.950 "sha256", 00:22:43.950 "sha384", 00:22:43.950 "sha512" 00:22:43.950 ], 00:22:43.950 "dhchap_dhgroups": [ 00:22:43.950 "null", 00:22:43.950 "ffdhe2048", 00:22:43.950 "ffdhe3072", 00:22:43.950 "ffdhe4096", 00:22:43.950 "ffdhe6144", 00:22:43.950 "ffdhe8192" 00:22:43.950 ] 00:22:43.950 } 00:22:43.950 }, 00:22:43.950 { 00:22:43.950 "method": "bdev_nvme_attach_controller", 00:22:43.950 "params": { 00:22:43.950 "name": "nvme0", 00:22:43.950 "trtype": "TCP", 00:22:43.950 "adrfam": "IPv4", 00:22:43.950 "traddr": "10.0.0.2", 00:22:43.950 "trsvcid": "4420", 00:22:43.950 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:43.950 "prchk_reftag": false, 00:22:43.950 "prchk_guard": false, 00:22:43.950 "ctrlr_loss_timeout_sec": 0, 00:22:43.950 "reconnect_delay_sec": 0, 00:22:43.950 "fast_io_fail_timeout_sec": 0, 00:22:43.950 "psk": "key0", 00:22:43.950 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:43.950 "hdgst": false, 00:22:43.950 "ddgst": false, 00:22:43.950 "multipath": "multipath" 00:22:43.950 } 00:22:43.950 }, 00:22:43.950 { 00:22:43.950 "method": "bdev_nvme_set_hotplug", 00:22:43.950 "params": { 00:22:43.950 "period_us": 100000, 00:22:43.950 "enable": false 00:22:43.950 } 00:22:43.950 }, 00:22:43.950 { 00:22:43.950 "method": "bdev_enable_histogram", 00:22:43.950 "params": { 00:22:43.950 "name": "nvme0n1", 00:22:43.950 "enable": true 00:22:43.950 } 00:22:43.950 }, 00:22:43.950 { 00:22:43.950 "method": "bdev_wait_for_examine" 00:22:43.950 } 00:22:43.950 ] 00:22:43.950 }, 00:22:43.950 { 00:22:43.950 "subsystem": "nbd", 00:22:43.950 "config": [] 00:22:43.950 } 00:22:43.950 ] 00:22:43.950 }' 00:22:43.950 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:43.950 06:33:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:44.208 [2024-11-20 06:33:15.795061] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:22:44.208 [2024-11-20 06:33:15.795159] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2119203 ] 00:22:44.208 [2024-11-20 06:33:15.862437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:44.208 [2024-11-20 06:33:15.920888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:44.466 [2024-11-20 06:33:16.108086] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:45.091 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:45.092 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@866 -- # return 0 00:22:45.092 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:22:45.092 06:33:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:22:45.349 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:45.349 06:33:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:45.607 Running I/O for 1 seconds... 00:22:46.540 3502.00 IOPS, 13.68 MiB/s 00:22:46.540 Latency(us) 00:22:46.540 [2024-11-20T05:33:18.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.540 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:22:46.540 Verification LBA range: start 0x0 length 0x2000 00:22:46.540 nvme0n1 : 1.02 3551.69 13.87 0.00 0.00 35679.12 8641.04 48545.19 00:22:46.540 [2024-11-20T05:33:18.376Z] =================================================================================================================== 00:22:46.540 [2024-11-20T05:33:18.376Z] Total : 3551.69 13.87 0.00 0.00 35679.12 8641.04 48545.19 00:22:46.540 { 00:22:46.540 "results": [ 00:22:46.540 { 00:22:46.540 "job": "nvme0n1", 00:22:46.540 "core_mask": "0x2", 00:22:46.540 "workload": "verify", 00:22:46.540 "status": "finished", 00:22:46.540 "verify_range": { 00:22:46.540 "start": 0, 00:22:46.540 "length": 8192 00:22:46.540 }, 00:22:46.540 "queue_depth": 128, 00:22:46.540 "io_size": 4096, 00:22:46.540 "runtime": 1.022329, 00:22:46.540 "iops": 3551.6942197668263, 00:22:46.540 "mibps": 13.873805545964165, 00:22:46.540 "io_failed": 0, 00:22:46.540 "io_timeout": 0, 00:22:46.540 "avg_latency_us": 35679.12012321878, 00:22:46.540 "min_latency_us": 8641.042962962963, 00:22:46.540 "max_latency_us": 48545.18518518518 00:22:46.540 } 00:22:46.540 ], 00:22:46.540 "core_count": 1 00:22:46.540 } 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # type=--id 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@811 -- # id=0 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # '[' --id 
= --pid ']' 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@822 -- # for n in $shm_files 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:46.540 nvmf_trace.0 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # return 0 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2119203 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2119203 ']' 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2119203 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:46.540 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2119203 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2119203' 00:22:46.799 killing process with pid 2119203 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2119203 00:22:46.799 Received shutdown signal, test time was about 1.000000 seconds 00:22:46.799 00:22:46.799 Latency(us) 00:22:46.799 [2024-11-20T05:33:18.635Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:46.799 [2024-11-20T05:33:18.635Z] =================================================================================================================== 00:22:46.799 [2024-11-20T05:33:18.635Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2119203 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:46.799 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:46.799 rmmod nvme_tcp 00:22:46.799 rmmod nvme_fabrics 00:22:47.057 rmmod nvme_keyring 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:47.057 06:33:18 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 2119051 ']' 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 2119051 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # '[' -z 2119051 ']' 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # kill -0 2119051 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # uname 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2119051 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2119051' 00:22:47.057 killing process with pid 2119051 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@971 -- # kill 2119051 00:22:47.057 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@976 -- # wait 2119051 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:47.316 06:33:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.220 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:49.220 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.NLUH5B8Prh /tmp/tmp.I17JezJCP7 /tmp/tmp.jMKaJyqxjB 00:22:49.220 00:22:49.220 real 1m23.784s 00:22:49.220 user 2m18.172s 00:22:49.220 sys 0m26.138s 00:22:49.220 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1128 -- # xtrace_disable 00:22:49.220 06:33:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:49.220 ************************************ 00:22:49.220 END TEST nvmf_tls 
00:22:49.220 ************************************ 00:22:49.220 06:33:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:49.220 06:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:22:49.220 06:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:22:49.220 06:33:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:49.220 ************************************ 00:22:49.220 START TEST nvmf_fips 00:22:49.220 ************************************ 00:22:49.220 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:49.479 * Looking for test storage... 00:22:49.479 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lcov --version 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:22:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.479 --rc genhtml_branch_coverage=1 00:22:49.479 --rc genhtml_function_coverage=1 00:22:49.479 --rc genhtml_legend=1 00:22:49.479 --rc geninfo_all_blocks=1 00:22:49.479 --rc geninfo_unexecuted_blocks=1 00:22:49.479 00:22:49.479 ' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:22:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.479 --rc genhtml_branch_coverage=1 00:22:49.479 --rc genhtml_function_coverage=1 00:22:49.479 --rc genhtml_legend=1 00:22:49.479 --rc geninfo_all_blocks=1 00:22:49.479 --rc geninfo_unexecuted_blocks=1 00:22:49.479 00:22:49.479 ' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:22:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.479 --rc genhtml_branch_coverage=1 00:22:49.479 --rc genhtml_function_coverage=1 00:22:49.479 --rc genhtml_legend=1 00:22:49.479 --rc geninfo_all_blocks=1 00:22:49.479 --rc geninfo_unexecuted_blocks=1 00:22:49.479 00:22:49.479 ' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:22:49.479 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:49.479 --rc genhtml_branch_coverage=1 00:22:49.479 --rc genhtml_function_coverage=1 00:22:49.479 --rc genhtml_legend=1 00:22:49.479 --rc geninfo_all_blocks=1 00:22:49.479 --rc geninfo_unexecuted_blocks=1 00:22:49.479 00:22:49.479 ' 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:49.479 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:49.480 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:49.480 06:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:49.480 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -P openssl 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:22:49.481 Error setting digest 00:22:49.481 408247972E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:49.481 408247972E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:49.481 
06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:22:49.481 06:33:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # x722=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:52.014 06:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:22:52.014 Found 0000:09:00.0 (0x8086 - 0x159b) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:22:52.014 Found 0000:09:00.1 (0x8086 - 0x159b) 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:52.014 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.015 06:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:22:52.015 Found net devices under 0000:09:00.0: cvl_0_0 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:22:52.015 Found net devices under 0000:09:00.1: cvl_0_1 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:52.015 06:33:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:52.015 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.015 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:22:52.015 00:22:52.015 --- 10.0.0.2 ping statistics --- 00:22:52.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.015 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:52.015 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.015 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:22:52.015 00:22:52.015 --- 10.0.0.1 ping statistics --- 00:22:52.015 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.015 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=2121576 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 2121576 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2121576 ']' 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:52.015 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.015 [2024-11-20 06:33:23.726616] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
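With the FIPS provider confirmed (that is what the deliberately failing openssl md5 /dev/fd/62 above demonstrates: MD5 is unavailable once the fips provider is the active default), fips.sh brings the target up inside the cvl_0_0_ns_spdk network namespace created just before. A condensed sketch of the startup as logged; the pid capture is illustrative and waitforlisten is again the autotest_common.sh helper, here polling the default /var/tmp/spdk.sock:

  ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &
  nvmfpid=$!                     # 2121576 in this run
  waitforlisten "$nvmfpid"       # default RPC socket: /var/tmp/spdk.sock

setup_nvmf_tgt_conf then drives rpc.py against that socket, which is what produces the "TCP Transport Init" and "NVMe/TCP Target Listening on 10.0.0.2 port 4420" notices further down.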
00:22:52.015 [2024-11-20 06:33:23.726732] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.015 [2024-11-20 06:33:23.796758] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.274 [2024-11-20 06:33:23.852155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.274 [2024-11-20 06:33:23.852219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.274 [2024-11-20 06:33:23.852244] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.274 [2024-11-20 06:33:23.852255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.274 [2024-11-20 06:33:23.852265] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:52.274 [2024-11-20 06:33:23.852922] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.9ng 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.9ng 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.9ng 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.9ng 00:22:52.274 06:33:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:52.533 [2024-11-20 06:33:24.259618] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.533 [2024-11-20 06:33:24.275592] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:52.533 [2024-11-20 06:33:24.275844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.533 malloc0 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:52.533 06:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2121730 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2121730 /var/tmp/bdevperf.sock 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # '[' -z 2121730 ']' 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # local max_retries=100 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:52.533 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # xtrace_disable 00:22:52.533 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:52.791 [2024-11-20 06:33:24.413035] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:22:52.791 [2024-11-20 06:33:24.413139] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2121730 ] 00:22:52.791 [2024-11-20 06:33:24.479387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:52.791 [2024-11-20 06:33:24.537975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.049 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:22:53.049 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@866 -- # return 0 00:22:53.049 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.9ng 00:22:53.308 06:33:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:53.566 [2024-11-20 06:33:25.283989] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:53.566 TLSTESTn1 00:22:53.566 06:33:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:53.823 Running I/O for 10 seconds... 
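For reference, the initiator-side sequence that produced the TLSTESTn1 run above condenses to the sketch below. The interchange-format PSK, the mktemp/chmod steps, and the rpc.py arguments are all as logged; the shell variables are only there to keep the sketch readable.

  key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
  key_path=$(mktemp -t spdk-psk.XXX)            # /tmp/spdk-psk.9ng in this run
  echo -n "$key" > "$key_path"
  chmod 0600 "$key_path"

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # register the PSK with bdevperf's keyring, then attach the TLS-protected controller
  $rpc -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
      -q nqn.2016-06.io.spdk:host1 --psk key0
  # kick off the 10-second verify workload whose per-second samples follow
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests

Note that --psk key0 names the keyring entry registered in the first rpc.py call, not the key file itself, and the target was given the same PSK earlier through setup_nvmf_tgt_conf, so both ends of the connection share the same pre-shared key.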
00:22:55.689 3319.00 IOPS, 12.96 MiB/s [2024-11-20T05:33:28.897Z] 3408.00 IOPS, 13.31 MiB/s [2024-11-20T05:33:29.830Z] 3431.33 IOPS, 13.40 MiB/s [2024-11-20T05:33:30.762Z] 3446.00 IOPS, 13.46 MiB/s [2024-11-20T05:33:31.694Z] 3450.80 IOPS, 13.48 MiB/s [2024-11-20T05:33:32.625Z] 3440.67 IOPS, 13.44 MiB/s [2024-11-20T05:33:33.558Z] 3437.29 IOPS, 13.43 MiB/s [2024-11-20T05:33:34.930Z] 3449.38 IOPS, 13.47 MiB/s [2024-11-20T05:33:35.866Z] 3454.67 IOPS, 13.49 MiB/s [2024-11-20T05:33:35.866Z] 3460.20 IOPS, 13.52 MiB/s 00:23:04.030 Latency(us) 00:23:04.030 [2024-11-20T05:33:35.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.030 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:04.030 Verification LBA range: start 0x0 length 0x2000 00:23:04.030 TLSTESTn1 : 10.02 3465.18 13.54 0.00 0.00 36872.80 6796.33 37671.06 00:23:04.030 [2024-11-20T05:33:35.866Z] =================================================================================================================== 00:23:04.030 [2024-11-20T05:33:35.866Z] Total : 3465.18 13.54 0.00 0.00 36872.80 6796.33 37671.06 00:23:04.030 { 00:23:04.030 "results": [ 00:23:04.030 { 00:23:04.030 "job": "TLSTESTn1", 00:23:04.030 "core_mask": "0x4", 00:23:04.030 "workload": "verify", 00:23:04.030 "status": "finished", 00:23:04.030 "verify_range": { 00:23:04.030 "start": 0, 00:23:04.030 "length": 8192 00:23:04.030 }, 00:23:04.030 "queue_depth": 128, 00:23:04.030 "io_size": 4096, 00:23:04.030 "runtime": 10.022278, 00:23:04.030 "iops": 3465.1802713913944, 00:23:04.030 "mibps": 13.535860435122634, 00:23:04.030 "io_failed": 0, 00:23:04.030 "io_timeout": 0, 00:23:04.030 "avg_latency_us": 36872.79601042143, 00:23:04.030 "min_latency_us": 6796.325925925926, 00:23:04.030 "max_latency_us": 37671.0637037037 00:23:04.030 } 00:23:04.030 ], 00:23:04.030 "core_count": 1 00:23:04.030 } 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # type=--id 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@811 -- # id=0 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@822 -- # for n in $shm_files 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:04.030 nvmf_trace.0 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # return 0 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2121730 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2121730 ']' 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@956 -- # kill -0 2121730 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2121730 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_2 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2121730' 00:23:04.030 killing process with pid 2121730 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2121730 00:23:04.030 Received shutdown signal, test time was about 10.000000 seconds 00:23:04.030 00:23:04.030 Latency(us) 00:23:04.030 [2024-11-20T05:33:35.866Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:04.030 [2024-11-20T05:33:35.866Z] =================================================================================================================== 00:23:04.030 [2024-11-20T05:33:35.866Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:04.030 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2121730 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:04.288 rmmod nvme_tcp 00:23:04.288 rmmod nvme_fabrics 00:23:04.288 rmmod nvme_keyring 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 2121576 ']' 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 2121576 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # '[' -z 2121576 ']' 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # kill -0 2121576 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # uname 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2121576 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:23:04.288 06:33:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2121576' 00:23:04.288 killing process with pid 2121576 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@971 -- # kill 2121576 00:23:04.288 06:33:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@976 -- # wait 2121576 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:04.547 06:33:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.454 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:06.454 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.9ng 00:23:06.454 00:23:06.454 real 0m17.229s 00:23:06.454 user 0m23.005s 00:23:06.454 sys 0m5.417s 00:23:06.454 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:06.454 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:06.454 ************************************ 00:23:06.454 END TEST nvmf_fips 00:23:06.454 ************************************ 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:06.713 ************************************ 00:23:06.713 START TEST nvmf_control_msg_list 00:23:06.713 ************************************ 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:23:06.713 * Looking for test storage... 
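[annotation] For reference, the nvmf_fips cleanup and nvmftestfini trace just before this test started amounts to the teardown steps below. This is a sketch reconstructed from the trace, not the helper scripts themselves; the pids and interface names are the ones that happened to appear in this run, and the namespace deletion is an assumption about what the redirected _remove_spdk_ns helper does.

  # Teardown performed after the fips run above (names/pids taken from this run).
  tar -C /dev/shm/ -cvzf "$SPDK_DIR/../output/nvmf_trace.0_shm.tar.gz" nvmf_trace.0  # keep the trace shm file
  kill 2121730                                   # bdevperf
  kill 2121576                                   # nvmf_tgt
  modprobe -v -r nvme-tcp                        # rmmod nvme_tcp / nvme_fabrics / nvme_keyring
  modprobe -v -r nvme-fabrics
  iptables-save | grep -v SPDK_NVMF | iptables-restore   # drop only the SPDK_NVMF-tagged rules
  ip netns delete cvl_0_0_ns_spdk                # assumed: _remove_spdk_ns tears down the target namespace
  ip -4 addr flush cvl_0_1                       # clear the initiator-side address
  rm -f /tmp/spdk-psk.9ng                        # remove the temporary TLS PSK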
00:23:06.713 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lcov --version 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:06.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.713 --rc genhtml_branch_coverage=1 00:23:06.713 --rc genhtml_function_coverage=1 00:23:06.713 --rc genhtml_legend=1 00:23:06.713 --rc geninfo_all_blocks=1 00:23:06.713 --rc geninfo_unexecuted_blocks=1 00:23:06.713 00:23:06.713 ' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:06.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.713 --rc genhtml_branch_coverage=1 00:23:06.713 --rc genhtml_function_coverage=1 00:23:06.713 --rc genhtml_legend=1 00:23:06.713 --rc geninfo_all_blocks=1 00:23:06.713 --rc geninfo_unexecuted_blocks=1 00:23:06.713 00:23:06.713 ' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:06.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.713 --rc genhtml_branch_coverage=1 00:23:06.713 --rc genhtml_function_coverage=1 00:23:06.713 --rc genhtml_legend=1 00:23:06.713 --rc geninfo_all_blocks=1 00:23:06.713 --rc geninfo_unexecuted_blocks=1 00:23:06.713 00:23:06.713 ' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:06.713 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.713 --rc genhtml_branch_coverage=1 00:23:06.713 --rc genhtml_function_coverage=1 00:23:06.713 --rc genhtml_legend=1 00:23:06.713 --rc geninfo_all_blocks=1 00:23:06.713 --rc geninfo_unexecuted_blocks=1 00:23:06.713 00:23:06.713 ' 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:06.713 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:06.714 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:23:06.714 06:33:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:23:09.245 06:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:23:09.245 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:09.246 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.246 06:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:09.246 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:09.246 Found net devices under 0000:09:00.0: cvl_0_0 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:09.246 Found net devices under 0000:09:00.1: cvl_0_1 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:09.246 06:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:09.246 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:09.246 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.309 ms 00:23:09.246 00:23:09.246 --- 10.0.0.2 ping statistics --- 00:23:09.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.246 rtt min/avg/max/mdev = 0.309/0.309/0.309/0.000 ms 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:09.246 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:09.246 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:23:09.246 00:23:09.246 --- 10.0.0.1 ping statistics --- 00:23:09.246 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:09.246 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:09.246 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=2124991 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 2124991 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@833 -- # '[' -z 2124991 ']' 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:09.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 [2024-11-20 06:33:40.696550] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:23:09.247 [2024-11-20 06:33:40.696643] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:09.247 [2024-11-20 06:33:40.768119] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.247 [2024-11-20 06:33:40.826194] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:09.247 [2024-11-20 06:33:40.826251] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:09.247 [2024-11-20 06:33:40.826264] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:09.247 [2024-11-20 06:33:40.826275] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:09.247 [2024-11-20 06:33:40.826285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
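[annotation] The NIC discovery and nvmftestinit trace above is the standard NET_TYPE=phy loopback wiring: both ports of the same E810 card are used, one as the target inside a private network namespace and one as the initiator in the root namespace. A minimal sketch of that wiring, using the interface names from this run (cvl_0_0 / cvl_0_1) and SPDK_DIR as a placeholder for the checkout path:

  # Loopback wiring for NET_TYPE=phy, as traced above (interface names from this run).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                   # root namespace -> namespaced target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # namespaced target -> root namespace
  # nvmf_tgt is then started inside the namespace, as the trace shows next:
  ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF &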
00:23:09.247 [2024-11-20 06:33:40.826918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@866 -- # return 0 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 [2024-11-20 06:33:40.982309] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.247 06:33:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 Malloc0 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.247 06:33:41 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:09.247 [2024-11-20 06:33:41.022629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2125011 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2125012 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2125013 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2125011 00:23:09.247 06:33:41 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:09.505 [2024-11-20 06:33:41.081142] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:09.505 [2024-11-20 06:33:41.091778] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:09.505 [2024-11-20 06:33:41.092071] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:10.439 Initializing NVMe Controllers 00:23:10.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:10.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:23:10.439 Initialization complete. Launching workers. 
00:23:10.439 ======================================================== 00:23:10.439 Latency(us) 00:23:10.439 Device Information : IOPS MiB/s Average min max 00:23:10.439 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40875.55 40271.50 40973.68 00:23:10.439 ======================================================== 00:23:10.439 Total : 25.00 0.10 40875.55 40271.50 40973.68 00:23:10.439 00:23:10.439 Initializing NVMe Controllers 00:23:10.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:10.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:23:10.439 Initialization complete. Launching workers. 00:23:10.439 ======================================================== 00:23:10.439 Latency(us) 00:23:10.439 Device Information : IOPS MiB/s Average min max 00:23:10.439 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 5863.96 22.91 170.12 158.18 294.50 00:23:10.439 ======================================================== 00:23:10.439 Total : 5863.96 22.91 170.12 158.18 294.50 00:23:10.439 00:23:10.439 Initializing NVMe Controllers 00:23:10.439 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:10.439 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:23:10.439 Initialization complete. Launching workers. 00:23:10.439 ======================================================== 00:23:10.439 Latency(us) 00:23:10.439 Device Information : IOPS MiB/s Average min max 00:23:10.439 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40900.60 40839.39 40966.43 00:23:10.439 ======================================================== 00:23:10.439 Total : 25.00 0.10 40900.60 40839.39 40966.43 00:23:10.439 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2125012 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2125013 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:10.439 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:10.439 rmmod nvme_tcp 00:23:10.698 rmmod nvme_fabrics 00:23:10.698 rmmod nvme_keyring 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/common.sh@517 -- # '[' -n 2124991 ']' 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 2124991 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@952 -- # '[' -z 2124991 ']' 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@956 -- # kill -0 2124991 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # uname 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2124991 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2124991' 00:23:10.698 killing process with pid 2124991 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@971 -- # kill 2124991 00:23:10.698 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@976 -- # wait 2124991 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:10.958 06:33:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:12.862 00:23:12.862 real 0m6.323s 00:23:12.862 user 0m5.442s 00:23:12.862 sys 0m2.669s 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:23:12.862 ************************************ 00:23:12.862 END TEST nvmf_control_msg_list 00:23:12.862 
************************************ 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:12.862 06:33:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:13.121 ************************************ 00:23:13.121 START TEST nvmf_wait_for_buf 00:23:13.121 ************************************ 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:23:13.121 * Looking for test storage... 00:23:13.121 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lcov --version 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@345 -- # : 1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:13.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.121 --rc genhtml_branch_coverage=1 00:23:13.121 --rc genhtml_function_coverage=1 00:23:13.121 --rc genhtml_legend=1 00:23:13.121 --rc geninfo_all_blocks=1 00:23:13.121 --rc geninfo_unexecuted_blocks=1 00:23:13.121 00:23:13.121 ' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:13.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.121 --rc genhtml_branch_coverage=1 00:23:13.121 --rc genhtml_function_coverage=1 00:23:13.121 --rc genhtml_legend=1 00:23:13.121 --rc geninfo_all_blocks=1 00:23:13.121 --rc geninfo_unexecuted_blocks=1 00:23:13.121 00:23:13.121 ' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:13.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.121 --rc genhtml_branch_coverage=1 00:23:13.121 --rc genhtml_function_coverage=1 00:23:13.121 --rc genhtml_legend=1 00:23:13.121 --rc geninfo_all_blocks=1 00:23:13.121 --rc geninfo_unexecuted_blocks=1 00:23:13.121 00:23:13.121 ' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:13.121 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:13.121 --rc genhtml_branch_coverage=1 00:23:13.121 --rc genhtml_function_coverage=1 00:23:13.121 --rc genhtml_legend=1 00:23:13.121 --rc geninfo_all_blocks=1 00:23:13.121 --rc geninfo_unexecuted_blocks=1 00:23:13.121 00:23:13.121 ' 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:13.121 06:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:13.121 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:13.122 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # 
'[' -z tcp ']' 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:23:13.122 06:33:44 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.653 
06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:15.653 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:15.653 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.653 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:15.654 Found net devices under 0000:09:00.0: cvl_0_0 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:15.654 Found net devices under 0000:09:00.1: cvl_0_1 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.654 06:33:46 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.654 06:33:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:15.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.340 ms 00:23:15.654 00:23:15.654 --- 10.0.0.2 ping statistics --- 00:23:15.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.654 rtt min/avg/max/mdev = 0.340/0.340/0.340/0.000 ms 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:15.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:23:15.654 00:23:15.654 --- 10.0.0.1 ping statistics --- 00:23:15.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.654 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=2127205 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 2127205 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@833 -- # '[' -z 2127205 ']' 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.654 [2024-11-20 06:33:47.127576] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
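The nvmf_tcp_init trace above sets up the standard phy-mode test topology before the target starts: the first E810 port (cvl_0_0) is moved into a dedicated network namespace and addressed as 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator side with 10.0.0.1, an iptables rule opens TCP port 4420, and a ping in each direction confirms connectivity. Condensed into a plain shell sketch (namespace and interface names taken from this run; the real logic lives in test/nvmf/common.sh):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                     # root namespace -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target namespace -> root namespace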
00:23:15.654 [2024-11-20 06:33:47.127661] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.654 [2024-11-20 06:33:47.196816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.654 [2024-11-20 06:33:47.253041] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.654 [2024-11-20 06:33:47.253096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.654 [2024-11-20 06:33:47.253118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.654 [2024-11-20 06:33:47.253127] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.654 [2024-11-20 06:33:47.253137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:15.654 [2024-11-20 06:33:47.253770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@866 -- # return 0 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:23:15.654 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.655 06:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.655 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.655 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:23:15.655 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.655 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.913 Malloc0 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.913 [2024-11-20 06:33:47.502671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:15.913 [2024-11-20 06:33:47.526868] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.913 06:33:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:23:15.913 [2024-11-20 06:33:47.602393] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the 
discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:23:17.286 Initializing NVMe Controllers 00:23:17.287 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:23:17.287 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:23:17.287 Initialization complete. Launching workers. 00:23:17.287 ======================================================== 00:23:17.287 Latency(us) 00:23:17.287 Device Information : IOPS MiB/s Average min max 00:23:17.287 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 128.55 16.07 32230.60 7996.47 63850.66 00:23:17.287 ======================================================== 00:23:17.287 Total : 128.55 16.07 32230.60 7996.47 63850.66 00:23:17.287 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2038 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2038 -eq 0 ]] 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:17.287 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:17.287 rmmod nvme_tcp 00:23:17.287 rmmod nvme_fabrics 00:23:17.546 rmmod nvme_keyring 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 2127205 ']' 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 2127205 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@952 -- # '[' -z 2127205 ']' 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@956 -- # kill -0 2127205 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@957 -- # uname 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2127205 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2127205' 00:23:17.546 killing process with pid 2127205 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@971 -- # kill 2127205 00:23:17.546 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@976 -- # wait 2127205 00:23:17.807 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:17.807 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:17.807 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:17.807 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:17.808 06:33:49 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:19.714 00:23:19.714 real 0m6.752s 00:23:19.714 user 0m3.189s 00:23:19.714 sys 0m2.024s 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:23:19.714 ************************************ 00:23:19.714 END TEST nvmf_wait_for_buf 00:23:19.714 ************************************ 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:23:19.714 06:33:51 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:23:19.714 06:33:51 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # pci_devs=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:22.292 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:22.292 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:22.292 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:22.293 Found net devices under 0000:09:00.0: cvl_0_0 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- 
nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:22.293 Found net devices under 0000:09:00.1: cvl_0_1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:22.293 ************************************ 00:23:22.293 START TEST nvmf_perf_adq 00:23:22.293 ************************************ 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:23:22.293 * Looking for test storage... 00:23:22.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lcov --version 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:22.293 06:33:53 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:23:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.293 --rc genhtml_branch_coverage=1 00:23:22.293 --rc genhtml_function_coverage=1 00:23:22.293 --rc genhtml_legend=1 00:23:22.293 --rc geninfo_all_blocks=1 00:23:22.293 --rc geninfo_unexecuted_blocks=1 00:23:22.293 00:23:22.293 ' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:23:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.293 --rc genhtml_branch_coverage=1 00:23:22.293 --rc genhtml_function_coverage=1 00:23:22.293 --rc genhtml_legend=1 00:23:22.293 --rc geninfo_all_blocks=1 00:23:22.293 --rc geninfo_unexecuted_blocks=1 00:23:22.293 00:23:22.293 ' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:23:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.293 --rc genhtml_branch_coverage=1 00:23:22.293 --rc genhtml_function_coverage=1 00:23:22.293 --rc genhtml_legend=1 00:23:22.293 --rc geninfo_all_blocks=1 00:23:22.293 --rc geninfo_unexecuted_blocks=1 00:23:22.293 00:23:22.293 ' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:23:22.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:22.293 --rc genhtml_branch_coverage=1 00:23:22.293 --rc genhtml_function_coverage=1 00:23:22.293 --rc genhtml_legend=1 00:23:22.293 --rc geninfo_all_blocks=1 00:23:22.293 --rc geninfo_unexecuted_blocks=1 00:23:22.293 00:23:22.293 ' 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
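Both test scripts begin with the same lcov probe traced above: common/autotest_common.sh asks whether the installed lcov (1.15 here) is older than 2 via the lt()/cmp_versions() helpers in scripts/common.sh, and because it is, the legacy --rc lcov_branch_coverage style options are selected. A minimal standalone sketch of that comparison, simplified to the '<' case exercised here (not the full scripts/common.sh implementation):

    # Split each version on "." "-" ":" and compare fields numerically, left to right.
    cmp_versions() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"     # e.g. "1.15" -> (1 15)
        IFS=.-: read -ra ver2 <<< "$3"     # e.g. "2"    -> (2)
        local v d1 d2
        local max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for ((v = 0; v < max; v++)); do
            d1=${ver1[v]:-0} d2=${ver2[v]:-0}
            ((d1 < d2)) && return 0        # strictly older
            ((d1 > d2)) && return 1        # strictly newer
        done
        return 1                           # equal is not "less than"
    }
    lt() { cmp_versions "$1" '<' "$2"; }

    # As in this run: lcov 1.15 predates 2, so the legacy coverage flags are used.
    lt 1.15 2 && LCOV_OPTS='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'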
00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:22.293 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:23:22.294 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:23:22.294 06:33:53 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:22.294 06:33:53 
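The "[: : integer expression expected" complaint from nvmf/common.sh line 33 in the trace is the classic single-bracket pitfall: an empty or unset variable handed to a numeric -eq test. A small hedged illustration of the failure mode and the usual guards (SOME_TEST_FLAG is a placeholder name, not the variable the harness actually tests):

# Placeholder flag; empty in this run, which is what triggers the message.
SOME_TEST_FLAG=""

# This form emits "[: : integer expression expected" when the value is empty:
#   [ "$SOME_TEST_FLAG" -eq 1 ] && enable_feature

# Guard 1: default the value before the numeric comparison.
if [ "${SOME_TEST_FLAG:-0}" -eq 1 ]; then
    echo "feature enabled"
fi

# Guard 2: compare as a string, which tolerates an empty value.
if [[ "$SOME_TEST_FLAG" == "1" ]]; then
    echo "feature enabled"
fi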
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:24.198 06:33:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:24.198 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:24.198 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:24.198 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:24.199 Found net devices under 0000:09:00.0: cvl_0_0 00:23:24.199 06:33:55 
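gather_supported_nvmf_pci_devs in the trace above matches PCI functions against known Intel (E810: 0x1592/0x159b) and Mellanox device IDs from a prebuilt pci_bus_cache, then resolves each matching function to its kernel netdev through /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of the same discovery that leans on lspci instead of the harness's cache, so it shows the mechanics rather than the exact code:

#!/usr/bin/env bash
# Find Intel E810 functions and the netdev bound to each one.
intel=8086
net_devs=()

for dev in 159b 1592; do
    # 'lspci -Dmm -d <vendor>:<device>' prints the full domain:bus:dev.fn first.
    while read -r pci _; do
        [[ -z $pci ]] && continue
        for netdir in "/sys/bus/pci/devices/$pci/net/"*; do
            [[ -e $netdir ]] || continue
            net_devs+=("${netdir##*/}")
            echo "Found net device under $pci: ${netdir##*/}"
        done
    done < <(lspci -Dmm -d "$intel:$dev")
done

echo "candidate TCP interfaces: ${net_devs[*]}"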
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:24.199 Found net devices under 0000:09:00.1: cvl_0_1 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:24.199 06:33:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:25.136 06:33:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:27.036 06:33:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:23:32.305 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:32.306 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:32.306 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 
'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:32.306 Found net devices under 0000:09:00.0: cvl_0_0 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:32.306 Found net devices under 0000:09:00.1: cvl_0_1 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:32.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:32.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:23:32.306 00:23:32.306 --- 10.0.0.2 ping statistics --- 00:23:32.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.306 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:32.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:32.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.152 ms 00:23:32.306 00:23:32.306 --- 10.0.0.1 ping statistics --- 00:23:32.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:32.306 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2131931 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2131931 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2131931 ']' 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:32.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:32.306 06:34:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.306 [2024-11-20 06:34:03.800668] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
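nvmf_tcp_init in the trace splits the two E810 ports into a target side and an initiator side: cvl_0_0 is moved into a fresh cvl_0_0_ns_spdk namespace and addressed 10.0.0.2/24, cvl_0_1 stays in the root namespace as 10.0.0.1/24, an iptables rule opens TCP/4420, both directions are ping-checked, and nvmf_tgt is then launched inside the namespace. A condensed sketch of that sequence (interface, namespace and address values copied from the log; the binary path shortened):

#!/usr/bin/env bash
set -e
TGT_IF=cvl_0_0          # target-side port, moved into the namespace
INI_IF=cvl_0_1          # initiator-side port, stays in the root namespace
NS=cvl_0_0_ns_spdk

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

ip addr add 10.0.0.1/24 dev "$INI_IF"
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"

ip link set "$INI_IF" up
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Tag the ACCEPT rule so teardown can strip it from an iptables-save dump later.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF: nvmf-tcp listener'

# Reachability in both directions before starting the target.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1

# Start the SPDK target inside the namespace, paused until RPC configuration.
ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &

The comment tag is what the later cleanup in this run relies on: it saves the ruleset, filters out every SPDK_NVMF-tagged line, and restores the rest.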
00:23:32.306 [2024-11-20 06:34:03.800754] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.306 [2024-11-20 06:34:03.870917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:32.306 [2024-11-20 06:34:03.931921] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.306 [2024-11-20 06:34:03.931969] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.306 [2024-11-20 06:34:03.931997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.307 [2024-11-20 06:34:03.932008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.307 [2024-11-20 06:34:03.932017] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:32.307 [2024-11-20 06:34:03.933652] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.307 [2024-11-20 06:34:03.933699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:32.307 [2024-11-20 06:34:03.933758] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:32.307 [2024-11-20 06:34:03.933762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.307 
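With the target idling under --wait-for-rpc, adq_configure_nvmf_target first asks it for the default socket implementation (posix on this run) and switches on the ADQ-relevant socket options, and only then lets the framework initialize and creates the TCP transport, malloc bdev, subsystem and listener seen in the RPCs that follow in the trace. Roughly, using scripts/rpc.py directly in place of the harness's rpc_cmd wrapper:

impl=$(scripts/rpc.py sock_get_default_impl | jq -r .impl_name)

# Placement IDs plus zero-copy send on the default sock implementation.
scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i "$impl"

# Finish init, then build the TCP transport and a malloc-backed subsystem.
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420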
06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.307 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.565 [2024-11-20 06:34:04.199808] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.565 Malloc1 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:32.565 [2024-11-20 06:34:04.269441] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2132072 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:32.565 06:34:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:34.464 "tick_rate": 2700000000, 00:23:34.464 "poll_groups": [ 00:23:34.464 { 00:23:34.464 "name": "nvmf_tgt_poll_group_000", 00:23:34.464 "admin_qpairs": 1, 00:23:34.464 "io_qpairs": 1, 00:23:34.464 "current_admin_qpairs": 1, 00:23:34.464 "current_io_qpairs": 1, 00:23:34.464 "pending_bdev_io": 0, 00:23:34.464 "completed_nvme_io": 19675, 00:23:34.464 "transports": [ 00:23:34.464 { 00:23:34.464 "trtype": "TCP" 00:23:34.464 } 00:23:34.464 ] 00:23:34.464 }, 00:23:34.464 { 00:23:34.464 "name": "nvmf_tgt_poll_group_001", 00:23:34.464 "admin_qpairs": 0, 00:23:34.464 "io_qpairs": 1, 00:23:34.464 "current_admin_qpairs": 0, 00:23:34.464 "current_io_qpairs": 1, 00:23:34.464 "pending_bdev_io": 0, 00:23:34.464 "completed_nvme_io": 19208, 00:23:34.464 "transports": [ 00:23:34.464 { 00:23:34.464 "trtype": "TCP" 00:23:34.464 } 00:23:34.464 ] 00:23:34.464 }, 00:23:34.464 { 00:23:34.464 "name": "nvmf_tgt_poll_group_002", 00:23:34.464 "admin_qpairs": 0, 00:23:34.464 "io_qpairs": 1, 00:23:34.464 "current_admin_qpairs": 0, 00:23:34.464 "current_io_qpairs": 1, 00:23:34.464 "pending_bdev_io": 0, 00:23:34.464 "completed_nvme_io": 20313, 00:23:34.464 "transports": [ 00:23:34.464 { 00:23:34.464 "trtype": "TCP" 00:23:34.464 } 00:23:34.464 ] 00:23:34.464 }, 00:23:34.464 { 00:23:34.464 "name": "nvmf_tgt_poll_group_003", 00:23:34.464 "admin_qpairs": 0, 00:23:34.464 "io_qpairs": 1, 00:23:34.464 "current_admin_qpairs": 0, 00:23:34.464 "current_io_qpairs": 1, 00:23:34.464 "pending_bdev_io": 0, 00:23:34.464 "completed_nvme_io": 19523, 00:23:34.464 "transports": [ 00:23:34.464 { 00:23:34.464 "trtype": "TCP" 00:23:34.464 } 00:23:34.464 ] 00:23:34.464 } 00:23:34.464 ] 00:23:34.464 }' 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:34.464 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:34.722 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:34.722 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:34.722 06:34:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2132072 00:23:42.829 Initializing NVMe Controllers 00:23:42.829 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:42.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:42.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:42.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:42.829 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:42.829 
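The pass/fail gate for this phase is the nvmf_get_stats snapshot taken a couple of seconds into the perf run: with placement IDs active, each of the four poll groups should own exactly one I/O qpair from the 0xF0-core initiator. A sketch of the same check outside the harness (rpc.py again standing in for rpc_cmd; the jq filter is the one from the trace):

expected=4   # one busy poll group per initiator core
count=$(scripts/rpc.py nvmf_get_stats \
        | jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' \
        | wc -l)

if [[ $count -ne $expected ]]; then
    echo "ADQ distribution check failed: $count of $expected poll groups have one qpair" >&2
    exit 1
fi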
Initialization complete. Launching workers. 00:23:42.829 ======================================================== 00:23:42.829 Latency(us) 00:23:42.829 Device Information : IOPS MiB/s Average min max 00:23:42.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10308.50 40.27 6208.54 2640.39 10800.16 00:23:42.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10149.60 39.65 6307.82 2611.30 10867.24 00:23:42.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10695.30 41.78 5985.42 2027.08 10160.76 00:23:42.829 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10341.50 40.40 6191.07 2594.40 10563.46 00:23:42.829 ======================================================== 00:23:42.829 Total : 41494.90 162.09 6170.96 2027.08 10867.24 00:23:42.829 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:23:42.829 rmmod nvme_tcp 00:23:42.829 rmmod nvme_fabrics 00:23:42.829 rmmod nvme_keyring 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2131931 ']' 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2131931 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2131931 ']' 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2131931 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2131931 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2131931' 00:23:42.829 killing process with pid 2131931 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2131931 00:23:42.829 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2131931 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@520 -- # '[' '' == iso ']' 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:43.088 06:34:14 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:44.993 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:23:44.993 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:44.993 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:44.993 06:34:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:45.929 06:34:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:47.830 06:34:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 
mellanox=0x15b3 pci net_dev 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:23:53.107 06:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:23:53.107 Found 0000:09:00.0 (0x8086 - 0x159b) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:23:53.107 Found 0000:09:00.1 (0x8086 - 0x159b) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.107 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:23:53.108 Found net devices under 0000:09:00.0: cvl_0_0 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:23:53.108 06:34:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:23:53.108 Found net devices under 0000:09:00.1: cvl_0_1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:23:53.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:23:53.108 00:23:53.108 --- 10.0.0.2 ping statistics --- 00:23:53.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.108 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:23:53.108 00:23:53.108 --- 10.0.0.1 ping statistics --- 00:23:53.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.108 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:53.108 net.core.busy_poll = 1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 
00:23:53.108 net.core.busy_read = 1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=2134578 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 2134578 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # '[' -z 2134578 ']' 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # local max_retries=100 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # xtrace_disable 00:23:53.108 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.108 [2024-11-20 06:34:24.758906] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:23:53.108 [2024-11-20 06:34:24.758980] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.108 [2024-11-20 06:34:24.831869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:53.108 [2024-11-20 06:34:24.890894] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
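For readability, here is the ADQ driver configuration from perf_adq.sh@22-38 above collected into one plain sequence: hardware traffic-class offload is enabled on the target-side interface, the ice driver's packet-inspect optimization is turned off, busy polling is enabled globally, and a two-TC mqprio root qdisc plus a hardware flower filter steer TCP traffic to 10.0.0.2:4420 into traffic class 1. The commands are the ones shown above; the namespace and interface names are specific to this run, and the private-flag name may differ between ice driver versions.

  # Target NIC lives inside the test namespace; run everything there.
  NS="ip netns exec cvl_0_0_ns_spdk"
  IF=cvl_0_0

  $NS ethtool --offload        $IF hw-tc-offload on                    # enable HW traffic-class offload
  $NS ethtool --set-priv-flags $IF channel-pkt-inspect-optimize off    # ice-specific flag used by ADQ
  sysctl -w net.core.busy_poll=1                                       # epoll busy polling
  sysctl -w net.core.busy_read=1                                       # socket-read busy polling

  # Two traffic classes: TC0 = default (queues 0-1), TC1 = ADQ application queues (2-3).
  $NS tc qdisc add dev $IF root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  $NS tc qdisc add dev $IF ingress
  # Steer NVMe/TCP traffic (dst 10.0.0.2:4420) into TC1 in hardware only (skip_sw).
  $NS tc filter add dev $IF protocol ip parent ffff: prio 1 flower \
      dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1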
00:23:53.108 [2024-11-20 06:34:24.890940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.108 [2024-11-20 06:34:24.890963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.108 [2024-11-20 06:34:24.890974] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.108 [2024-11-20 06:34:24.890984] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.108 [2024-11-20 06:34:24.892482] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.108 [2024-11-20 06:34:24.892541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:53.108 [2024-11-20 06:34:24.892618] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:53.108 [2024-11-20 06:34:24.892621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.367 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:23:53.367 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@866 -- # return 0 00:23:53.367 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:53.367 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:53.367 06:34:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.367 06:34:25 
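adq_configure_nvmf_target (perf_adq.sh@42-44 above) asks the target which socket implementation is the default, then enables placement-id grouping and zero-copy server sends on it before framework_start_init completes startup. rpc_cmd in this harness is effectively a wrapper around scripts/rpc.py, so a hedged equivalent using rpc.py directly looks like the following; the RPC names and flags are the ones visible in the trace above.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  impl=$($RPC sock_get_default_impl | jq -r .impl_name)     # "posix" in this run
  # Group connections by placement id and allow zero-copy sends on the server side.
  $RPC sock_impl_set_options -i "$impl" --enable-placement-id 1 --enable-zerocopy-send-server
  # Socket options must be applied before subsystem init, hence --wait-for-rpc on the target.
  $RPC framework_start_init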
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.367 [2024-11-20 06:34:25.173274] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.367 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.625 Malloc1 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:53.625 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:53.626 [2024-11-20 06:34:25.241369] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2134730 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:53.626 06:34:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:55.527 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:55.527 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:55.527 06:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:55.527 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:55.527 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:55.527 "tick_rate": 2700000000, 00:23:55.527 "poll_groups": [ 00:23:55.527 { 00:23:55.527 "name": "nvmf_tgt_poll_group_000", 00:23:55.527 "admin_qpairs": 1, 00:23:55.527 "io_qpairs": 0, 00:23:55.527 "current_admin_qpairs": 1, 00:23:55.527 "current_io_qpairs": 0, 00:23:55.527 "pending_bdev_io": 0, 00:23:55.527 "completed_nvme_io": 0, 00:23:55.527 "transports": [ 00:23:55.527 { 00:23:55.527 "trtype": "TCP" 00:23:55.527 } 00:23:55.527 ] 00:23:55.527 }, 00:23:55.527 { 00:23:55.527 "name": "nvmf_tgt_poll_group_001", 00:23:55.528 "admin_qpairs": 0, 00:23:55.528 "io_qpairs": 4, 00:23:55.528 "current_admin_qpairs": 0, 00:23:55.528 "current_io_qpairs": 4, 00:23:55.528 "pending_bdev_io": 0, 00:23:55.528 "completed_nvme_io": 33854, 00:23:55.528 "transports": [ 00:23:55.528 { 00:23:55.528 "trtype": "TCP" 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "name": "nvmf_tgt_poll_group_002", 00:23:55.528 "admin_qpairs": 0, 00:23:55.528 "io_qpairs": 0, 00:23:55.528 "current_admin_qpairs": 0, 00:23:55.528 "current_io_qpairs": 0, 00:23:55.528 "pending_bdev_io": 0, 00:23:55.528 "completed_nvme_io": 0, 00:23:55.528 "transports": [ 00:23:55.528 { 00:23:55.528 "trtype": "TCP" 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }, 00:23:55.528 { 00:23:55.528 "name": "nvmf_tgt_poll_group_003", 00:23:55.528 "admin_qpairs": 0, 00:23:55.528 "io_qpairs": 0, 00:23:55.528 "current_admin_qpairs": 0, 00:23:55.528 "current_io_qpairs": 0, 00:23:55.528 "pending_bdev_io": 0, 00:23:55.528 "completed_nvme_io": 0, 00:23:55.528 "transports": [ 00:23:55.528 { 00:23:55.528 "trtype": "TCP" 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 } 00:23:55.528 ] 00:23:55.528 }' 00:23:55.528 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:55.528 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:55.528 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=3 00:23:55.528 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 3 -lt 2 ]] 00:23:55.528 06:34:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2134730 00:24:03.633 Initializing NVMe Controllers 00:24:03.633 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:03.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:24:03.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:24:03.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:24:03.633 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:24:03.633 Initialization complete. Launching workers. 
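The sequence traced above (perf_adq.sh@45-49, then @101) is the whole data path for the measurement: a TCP transport with a raised I/O unit size and socket priority 1 (so target-side connections inherit the ADQ traffic class), a 64 MB malloc bdev exposed as nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420, and spdk_nvme_perf driving it from initiator cores 4-7. A condensed restatement of those steps, reusing the exact parameters from the log:

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf

  $RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1
  $RPC bdev_malloc_create 64 512 -b Malloc1                      # 64 MB bdev, 512 B blocks
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # 4 KiB random reads, queue depth 64, 10 s, initiator pinned to cores 4-7 (0xF0).
  $PERF -q 64 -o 4096 -w randread -t 10 -c 0xF0 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'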
00:24:03.633 ======================================================== 00:24:03.633 Latency(us) 00:24:03.633 Device Information : IOPS MiB/s Average min max 00:24:03.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4471.40 17.47 14348.54 1867.26 62328.70 00:24:03.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4364.20 17.05 14683.15 1526.90 61374.39 00:24:03.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4215.00 16.46 15190.45 1569.66 61714.20 00:24:03.633 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 4874.10 19.04 13134.89 1871.65 58431.84 00:24:03.633 ======================================================== 00:24:03.633 Total : 17924.70 70.02 14297.97 1526.90 62328.70 00:24:03.633 00:24:03.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:24:03.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:03.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:24:03.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:03.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:24:03.633 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:03.634 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:03.634 rmmod nvme_tcp 00:24:03.634 rmmod nvme_fabrics 00:24:03.634 rmmod nvme_keyring 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 2134578 ']' 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 2134578 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # '[' -z 2134578 ']' 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # kill -0 2134578 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # uname 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2134578 00:24:03.891 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:03.892 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:03.892 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2134578' 00:24:03.892 killing process with pid 2134578 00:24:03.892 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@971 -- # kill 2134578 00:24:03.892 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@976 -- # wait 2134578 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:04.150 
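The nvmf_get_stats output above is how the test decides ADQ steering actually worked: all four I/O queue pairs (and the ~33.8k completed I/Os) land on nvmf_tgt_poll_group_001 while the other three poll groups stay idle, and perf_adq.sh@107-109 asserts that at least two groups are idle. A hedged restatement of that check (the harness uses "jq ... | length | wc -l"; the filter below is an equivalent rewording):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  # Count poll groups that are carrying no I/O queue pairs while perf is running.
  idle=$($RPC nvmf_get_stats \
         | jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | .name' \
         | wc -l)

  # With ADQ steering, connections should collapse onto one (or at most two) groups.
  if (( idle < 2 )); then
      echo "ADQ steering ineffective: only $idle idle poll groups" >&2
      exit 1
  fi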
06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.150 06:34:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:24:07.439 00:24:07.439 real 0m45.192s 00:24:07.439 user 2m39.148s 00:24:07.439 sys 0m10.318s 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:24:07.439 ************************************ 00:24:07.439 END TEST nvmf_perf_adq 00:24:07.439 ************************************ 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:07.439 ************************************ 00:24:07.439 START TEST nvmf_shutdown 00:24:07.439 ************************************ 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:24:07.439 * Looking for test storage... 
00:24:07.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lcov --version 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:07.439 06:34:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:24:07.439 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:07.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.440 --rc genhtml_branch_coverage=1 00:24:07.440 --rc genhtml_function_coverage=1 00:24:07.440 --rc genhtml_legend=1 00:24:07.440 --rc geninfo_all_blocks=1 00:24:07.440 --rc geninfo_unexecuted_blocks=1 00:24:07.440 00:24:07.440 ' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:07.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.440 --rc genhtml_branch_coverage=1 00:24:07.440 --rc genhtml_function_coverage=1 00:24:07.440 --rc genhtml_legend=1 00:24:07.440 --rc geninfo_all_blocks=1 00:24:07.440 --rc geninfo_unexecuted_blocks=1 00:24:07.440 00:24:07.440 ' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:07.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.440 --rc genhtml_branch_coverage=1 00:24:07.440 --rc genhtml_function_coverage=1 00:24:07.440 --rc genhtml_legend=1 00:24:07.440 --rc geninfo_all_blocks=1 00:24:07.440 --rc geninfo_unexecuted_blocks=1 00:24:07.440 00:24:07.440 ' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:07.440 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:07.440 --rc genhtml_branch_coverage=1 00:24:07.440 --rc genhtml_function_coverage=1 00:24:07.440 --rc genhtml_legend=1 00:24:07.440 --rc geninfo_all_blocks=1 00:24:07.440 --rc geninfo_unexecuted_blocks=1 00:24:07.440 00:24:07.440 ' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 
00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:07.440 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:07.440 06:34:39 
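The "[: : integer expression expected" line above comes from nvmf/common.sh line 33 applying -eq to an empty variable; the harness tolerates it, but the usual way to silence such a warning is to give the variable a numeric default before the arithmetic test. A tiny illustration of the pattern (the variable name here is hypothetical, not from the SPDK sources):

  # Fails with "[: : integer expression expected" when SOME_FLAG is unset or empty:
  #   [ "$SOME_FLAG" -eq 1 ]
  # Defaulting the variable keeps the test purely numeric:
  SOME_FLAG=""                                  # hypothetical, for illustration only
  if [ "${SOME_FLAG:-0}" -eq 1 ]; then echo "flag set"; else echo "flag not set"; fi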
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:07.440 ************************************ 00:24:07.440 START TEST nvmf_shutdown_tc1 00:24:07.440 ************************************ 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc1 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:07.440 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:07.441 06:34:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.977 06:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.977 06:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:09.977 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:09.977 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:09.977 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:09.978 Found net devices under 0000:09:00.0: cvl_0_0 00:24:09.978 06:34:41 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:09.978 Found net devices under 0000:09:00.1: cvl_0_1 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns 
exec "$NVMF_TARGET_NAMESPACE") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:24:09.978 00:24:09.978 --- 10.0.0.2 ping statistics --- 00:24:09.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.978 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:24:09.978 00:24:09.978 --- 10.0.0.1 ping statistics --- 00:24:09.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.978 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=2138039 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 2138039 00:24:09.978 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2138039 ']' 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
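nvmfappstart above launches nvmf_tgt inside the target namespace with core mask 0x1E, records its pid, and waitforlisten then blocks until the application's RPC socket answers. A simplified sketch of that start-and-wait pattern follows; the polling loop is an approximation of what waitforlisten does, not a copy of it, and rpc_get_methods is used only as a cheap liveness probe.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS="ip netns exec cvl_0_0_ns_spdk"

  # Start the target on cores 1-4 (0x1E) inside the test namespace.
  $NS $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!

  # Wait until the RPC unix socket is up and the app answers a trivial RPC.
  for _ in $(seq 1 100); do
      if $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; then
          break
      fi
      kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
      sleep 0.1
  done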
00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.979 [2024-11-20 06:34:41.475022] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:09.979 [2024-11-20 06:34:41.475101] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.979 [2024-11-20 06:34:41.548505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:09.979 [2024-11-20 06:34:41.609029] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.979 [2024-11-20 06:34:41.609098] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.979 [2024-11-20 06:34:41.609118] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.979 [2024-11-20 06:34:41.609129] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.979 [2024-11-20 06:34:41.609139] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.979 [2024-11-20 06:34:41.610777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.979 [2024-11-20 06:34:41.610841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:09.979 [2024-11-20 06:34:41.610909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:09.979 [2024-11-20 06:34:41.610912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.979 [2024-11-20 06:34:41.760984] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:09.979 06:34:41 
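num_subsystems=({1..10}) above sets up the shutdown test's fan-out: the loop that follows (shutdown.sh@28-29, then @36) assembles one block of RPC commands per subsystem and replays them against the target, which is why Malloc1 through Malloc10 appear below. A minimal sketch of the same pattern, issuing the RPCs one by one instead of batching them through rpcs.txt as the script does; the serial-number and NQN layout here is illustrative, while the bdev sizes and listener address match the values seen in this log.

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

  for i in {1..10}; do
      $RPC bdev_malloc_create 64 512 -b "Malloc$i"
      $RPC nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$(printf '%016d' "$i")"
      $RPC nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
      $RPC nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done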
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:09.979 06:34:41 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.237 Malloc1 
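Between shutdown.sh@27 and shutdown.sh@36 the harness regenerates rpcs.txt (one cat per subsystem, ten in total) and replays the batch through rpc_cmd. The batch itself is not echoed in the log, but judging from the Malloc1-Malloc10 bdevs that follow and the cnode1-cnode10/host1-host10 names used in the generated JSON later, each iteration presumably boils down to something like the loop below (bdev geometry and serial numbers are placeholders, not values taken from the log):

for i in {1..10}; do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i    # placeholder size/block size
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
done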
00:24:10.237 [2024-11-20 06:34:41.847780] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:10.237 Malloc2 00:24:10.237 Malloc3 00:24:10.237 Malloc4 00:24:10.237 Malloc5 00:24:10.237 Malloc6 00:24:10.496 Malloc7 00:24:10.496 Malloc8 00:24:10.496 Malloc9 00:24:10.496 Malloc10 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2138216 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2138216 /var/tmp/bdevperf.sock 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # '[' -z 2138216 ']' 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:10.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
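Test case 1 first starts a throwaway bdev_svc application (single core, instance id 1, its own RPC socket) and hands it the NVMe-oF attach configuration over an anonymous file descriptor; the point is to have an initiator-side SPDK process that can later be hit with SIGKILL while the target stays up, which is exactly what the kill -9 / kill -0 pair further down does. The invocation pattern as it appears in the log, where /dev/fd/63 is simply bash process substitution:

./test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock \
    --json <(gen_nvmf_target_json "${num_subsystems[@]}") &
perfpid=$!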
00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.496 { 00:24:10.496 "params": { 00:24:10.496 "name": "Nvme$subsystem", 00:24:10.496 "trtype": "$TEST_TRANSPORT", 00:24:10.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.496 "adrfam": "ipv4", 00:24:10.496 "trsvcid": "$NVMF_PORT", 00:24:10.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.496 "hdgst": ${hdgst:-false}, 00:24:10.496 "ddgst": ${ddgst:-false} 00:24:10.496 }, 00:24:10.496 "method": "bdev_nvme_attach_controller" 00:24:10.496 } 00:24:10.496 EOF 00:24:10.496 )") 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.496 { 00:24:10.496 "params": { 00:24:10.496 "name": "Nvme$subsystem", 00:24:10.496 "trtype": "$TEST_TRANSPORT", 00:24:10.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.496 "adrfam": "ipv4", 00:24:10.496 "trsvcid": "$NVMF_PORT", 00:24:10.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.496 "hdgst": ${hdgst:-false}, 00:24:10.496 "ddgst": ${ddgst:-false} 00:24:10.496 }, 00:24:10.496 "method": "bdev_nvme_attach_controller" 00:24:10.496 } 00:24:10.496 EOF 00:24:10.496 )") 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.496 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.496 { 00:24:10.496 "params": { 00:24:10.496 "name": "Nvme$subsystem", 00:24:10.496 "trtype": "$TEST_TRANSPORT", 00:24:10.496 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.496 "adrfam": "ipv4", 00:24:10.496 "trsvcid": "$NVMF_PORT", 00:24:10.496 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.496 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.496 "hdgst": ${hdgst:-false}, 00:24:10.496 "ddgst": ${ddgst:-false} 00:24:10.496 }, 00:24:10.496 "method": "bdev_nvme_attach_controller" 00:24:10.497 } 00:24:10.497 EOF 00:24:10.497 )") 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.497 { 00:24:10.497 "params": { 00:24:10.497 "name": "Nvme$subsystem", 00:24:10.497 
"trtype": "$TEST_TRANSPORT", 00:24:10.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.497 "adrfam": "ipv4", 00:24:10.497 "trsvcid": "$NVMF_PORT", 00:24:10.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.497 "hdgst": ${hdgst:-false}, 00:24:10.497 "ddgst": ${ddgst:-false} 00:24:10.497 }, 00:24:10.497 "method": "bdev_nvme_attach_controller" 00:24:10.497 } 00:24:10.497 EOF 00:24:10.497 )") 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.497 { 00:24:10.497 "params": { 00:24:10.497 "name": "Nvme$subsystem", 00:24:10.497 "trtype": "$TEST_TRANSPORT", 00:24:10.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.497 "adrfam": "ipv4", 00:24:10.497 "trsvcid": "$NVMF_PORT", 00:24:10.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.497 "hdgst": ${hdgst:-false}, 00:24:10.497 "ddgst": ${ddgst:-false} 00:24:10.497 }, 00:24:10.497 "method": "bdev_nvme_attach_controller" 00:24:10.497 } 00:24:10.497 EOF 00:24:10.497 )") 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.497 { 00:24:10.497 "params": { 00:24:10.497 "name": "Nvme$subsystem", 00:24:10.497 "trtype": "$TEST_TRANSPORT", 00:24:10.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.497 "adrfam": "ipv4", 00:24:10.497 "trsvcid": "$NVMF_PORT", 00:24:10.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.497 "hdgst": ${hdgst:-false}, 00:24:10.497 "ddgst": ${ddgst:-false} 00:24:10.497 }, 00:24:10.497 "method": "bdev_nvme_attach_controller" 00:24:10.497 } 00:24:10.497 EOF 00:24:10.497 )") 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.497 { 00:24:10.497 "params": { 00:24:10.497 "name": "Nvme$subsystem", 00:24:10.497 "trtype": "$TEST_TRANSPORT", 00:24:10.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.497 "adrfam": "ipv4", 00:24:10.497 "trsvcid": "$NVMF_PORT", 00:24:10.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.497 "hdgst": ${hdgst:-false}, 00:24:10.497 "ddgst": ${ddgst:-false} 00:24:10.497 }, 00:24:10.497 "method": "bdev_nvme_attach_controller" 00:24:10.497 } 00:24:10.497 EOF 00:24:10.497 )") 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.497 06:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.497 { 00:24:10.497 "params": { 00:24:10.497 "name": "Nvme$subsystem", 00:24:10.497 "trtype": "$TEST_TRANSPORT", 00:24:10.497 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.497 "adrfam": "ipv4", 00:24:10.497 "trsvcid": "$NVMF_PORT", 00:24:10.497 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.497 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.497 "hdgst": ${hdgst:-false}, 00:24:10.497 "ddgst": ${ddgst:-false} 00:24:10.497 }, 00:24:10.497 "method": "bdev_nvme_attach_controller" 00:24:10.497 } 00:24:10.497 EOF 00:24:10.497 )") 00:24:10.497 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.755 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.755 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.755 { 00:24:10.755 "params": { 00:24:10.755 "name": "Nvme$subsystem", 00:24:10.756 "trtype": "$TEST_TRANSPORT", 00:24:10.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "$NVMF_PORT", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.756 "hdgst": ${hdgst:-false}, 00:24:10.756 "ddgst": ${ddgst:-false} 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 } 00:24:10.756 EOF 00:24:10.756 )") 00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:10.756 { 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme$subsystem", 00:24:10.756 "trtype": "$TEST_TRANSPORT", 00:24:10.756 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "$NVMF_PORT", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:10.756 "hdgst": ${hdgst:-false}, 00:24:10.756 "ddgst": ${ddgst:-false} 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 } 00:24:10.756 EOF 00:24:10.756 )") 00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
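The wall of repeated here-doc fragments above is gen_nvmf_target_json at work: for every requested subsystem number it appends one bdev_nvme_attach_controller stanza to a config array, then joins the stanzas with commas and pretty-prints the result through jq, which is what the printf immediately below shows. Reduced to its essentials the helper behaves roughly like the sketch here; the "config" wrapper in the last line is only a minimal stand-in to keep the joined stanzas valid JSON, the real helper embeds them in the full bdev-subsystem document that bdev_svc and bdevperf consume:

config=()
for subsystem in "$@"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '{ "config": [ %s ] }\n' "${config[*]}" | jq .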
00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:10.756 06:34:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme1", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme2", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme3", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme4", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme5", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme6", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme7", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme8", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme9", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 },{ 00:24:10.756 "params": { 00:24:10.756 "name": "Nvme10", 00:24:10.756 "trtype": "tcp", 00:24:10.756 "traddr": "10.0.0.2", 00:24:10.756 "adrfam": "ipv4", 00:24:10.756 "trsvcid": "4420", 00:24:10.756 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:10.756 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:10.756 "hdgst": false, 00:24:10.756 "ddgst": false 00:24:10.756 }, 00:24:10.756 "method": "bdev_nvme_attach_controller" 00:24:10.756 }' 00:24:10.756 [2024-11-20 06:34:42.347964] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:10.756 [2024-11-20 06:34:42.348056] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:24:10.756 [2024-11-20 06:34:42.420554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:10.756 [2024-11-20 06:34:42.480845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@866 -- # return 0 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2138216 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:24:12.742 06:34:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:24:13.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2138216 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2138039 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 
1 2 3 4 5 6 7 8 9 10 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.675 { 00:24:13.675 "params": { 00:24:13.675 "name": "Nvme$subsystem", 00:24:13.675 "trtype": "$TEST_TRANSPORT", 00:24:13.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.675 "adrfam": "ipv4", 00:24:13.675 "trsvcid": "$NVMF_PORT", 00:24:13.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.675 "hdgst": ${hdgst:-false}, 00:24:13.675 "ddgst": ${ddgst:-false} 00:24:13.675 }, 00:24:13.675 "method": "bdev_nvme_attach_controller" 00:24:13.675 } 00:24:13.675 EOF 00:24:13.675 )") 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.675 { 00:24:13.675 "params": { 00:24:13.675 "name": "Nvme$subsystem", 00:24:13.675 "trtype": "$TEST_TRANSPORT", 00:24:13.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.675 "adrfam": "ipv4", 00:24:13.675 "trsvcid": "$NVMF_PORT", 00:24:13.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.675 "hdgst": ${hdgst:-false}, 00:24:13.675 "ddgst": ${ddgst:-false} 00:24:13.675 }, 00:24:13.675 "method": "bdev_nvme_attach_controller" 00:24:13.675 } 00:24:13.675 EOF 00:24:13.675 )") 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.675 { 00:24:13.675 "params": { 00:24:13.675 "name": "Nvme$subsystem", 00:24:13.675 "trtype": "$TEST_TRANSPORT", 00:24:13.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.675 "adrfam": "ipv4", 00:24:13.675 "trsvcid": "$NVMF_PORT", 00:24:13.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.675 "hdgst": ${hdgst:-false}, 00:24:13.675 "ddgst": ${ddgst:-false} 00:24:13.675 }, 00:24:13.675 "method": "bdev_nvme_attach_controller" 00:24:13.675 } 00:24:13.675 EOF 00:24:13.675 )") 00:24:13.675 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 
"trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 
"params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:13.676 { 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme$subsystem", 00:24:13.676 "trtype": "$TEST_TRANSPORT", 00:24:13.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "$NVMF_PORT", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:13.676 "hdgst": ${hdgst:-false}, 00:24:13.676 "ddgst": ${ddgst:-false} 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 } 00:24:13.676 EOF 00:24:13.676 )") 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:24:13.676 06:34:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme1", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "4420", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:13.676 "hdgst": false, 00:24:13.676 "ddgst": false 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 },{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme2", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "4420", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:13.676 "hdgst": false, 00:24:13.676 "ddgst": false 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 },{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme3", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "4420", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:13.676 "hdgst": false, 00:24:13.676 "ddgst": false 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 },{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme4", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "4420", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:13.676 "hdgst": false, 00:24:13.676 "ddgst": false 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 },{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme5", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "4420", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:13.676 "hdgst": false, 00:24:13.676 "ddgst": false 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 },{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme6", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.676 "trsvcid": "4420", 00:24:13.676 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:13.676 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:13.676 "hdgst": false, 00:24:13.676 "ddgst": false 00:24:13.676 }, 00:24:13.676 "method": "bdev_nvme_attach_controller" 00:24:13.676 },{ 00:24:13.676 "params": { 00:24:13.676 "name": "Nvme7", 00:24:13.676 "trtype": "tcp", 00:24:13.676 "traddr": "10.0.0.2", 00:24:13.676 "adrfam": "ipv4", 00:24:13.677 "trsvcid": "4420", 00:24:13.677 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:13.677 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:13.677 "hdgst": false, 00:24:13.677 "ddgst": false 00:24:13.677 }, 00:24:13.677 "method": "bdev_nvme_attach_controller" 00:24:13.677 },{ 00:24:13.677 "params": { 00:24:13.677 "name": "Nvme8", 00:24:13.677 "trtype": "tcp", 00:24:13.677 "traddr": "10.0.0.2", 00:24:13.677 "adrfam": "ipv4", 00:24:13.677 "trsvcid": "4420", 00:24:13.677 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:13.677 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:13.677 "hdgst": false, 00:24:13.677 "ddgst": false 00:24:13.677 }, 00:24:13.677 "method": "bdev_nvme_attach_controller" 00:24:13.677 },{ 00:24:13.677 "params": { 00:24:13.677 "name": "Nvme9", 00:24:13.677 "trtype": "tcp", 00:24:13.677 "traddr": "10.0.0.2", 00:24:13.677 "adrfam": "ipv4", 00:24:13.677 "trsvcid": "4420", 00:24:13.677 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:13.677 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:13.677 "hdgst": false, 00:24:13.677 "ddgst": false 00:24:13.677 }, 00:24:13.677 "method": "bdev_nvme_attach_controller" 00:24:13.677 },{ 00:24:13.677 "params": { 00:24:13.677 "name": "Nvme10", 00:24:13.677 "trtype": "tcp", 00:24:13.677 "traddr": "10.0.0.2", 00:24:13.677 "adrfam": "ipv4", 00:24:13.677 "trsvcid": "4420", 00:24:13.677 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:13.677 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:13.677 "hdgst": false, 00:24:13.677 "ddgst": false 00:24:13.677 }, 00:24:13.677 "method": "bdev_nvme_attach_controller" 00:24:13.677 }' 00:24:13.677 [2024-11-20 06:34:45.412325] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:13.677 [2024-11-20 06:34:45.412415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138640 ] 00:24:13.677 [2024-11-20 06:34:45.485131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.935 [2024-11-20 06:34:45.546878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:15.307 Running I/O for 1 seconds... 00:24:16.240 1823.00 IOPS, 113.94 MiB/s 00:24:16.240 Latency(us) 00:24:16.240 [2024-11-20T05:34:48.076Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.240 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme1n1 : 1.16 221.42 13.84 0.00 0.00 284878.13 22622.06 256318.58 00:24:16.240 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme2n1 : 1.12 232.81 14.55 0.00 0.00 260501.28 18447.17 251658.24 00:24:16.240 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme3n1 : 1.12 231.73 14.48 0.00 0.00 258751.44 20486.07 256318.58 00:24:16.240 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme4n1 : 1.13 230.38 14.40 0.00 0.00 260784.22 5388.52 260978.92 00:24:16.240 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme5n1 : 1.19 215.97 13.50 0.00 0.00 274944.76 22719.15 273406.48 00:24:16.240 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme6n1 : 1.16 221.24 13.83 0.00 0.00 263101.06 19029.71 259425.47 00:24:16.240 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme7n1 : 1.16 280.97 17.56 0.00 0.00 203222.50 7475.96 243891.01 00:24:16.240 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification 
LBA range: start 0x0 length 0x400 00:24:16.240 Nvme8n1 : 1.15 226.65 14.17 0.00 0.00 247576.85 2961.26 245444.46 00:24:16.240 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme9n1 : 1.19 214.42 13.40 0.00 0.00 259375.98 22330.79 298261.62 00:24:16.240 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:16.240 Verification LBA range: start 0x0 length 0x400 00:24:16.240 Nvme10n1 : 1.20 265.57 16.60 0.00 0.00 206061.91 5752.60 260978.92 00:24:16.240 [2024-11-20T05:34:48.076Z] =================================================================================================================== 00:24:16.240 [2024-11-20T05:34:48.076Z] Total : 2341.16 146.32 0.00 0.00 249616.52 2961.26 298261.62 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:16.497 rmmod nvme_tcp 00:24:16.497 rmmod nvme_fabrics 00:24:16.497 rmmod nvme_keyring 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 2138039 ']' 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 2138039 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # '[' -z 2138039 ']' 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # kill -0 2138039 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@957 -- # uname 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2138039 00:24:16.497 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:16.498 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:16.498 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2138039' 00:24:16.498 killing process with pid 2138039 00:24:16.498 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@971 -- # kill 2138039 00:24:16.498 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@976 -- # wait 2138039 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:17.063 06:34:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:19.601 00:24:19.601 real 0m11.786s 00:24:19.601 user 0m33.335s 00:24:19.601 sys 0m3.373s 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:19.601 ************************************ 00:24:19.601 END TEST nvmf_shutdown_tc1 00:24:19.601 ************************************ 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 
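The teardown that closes test case 1 is the usual stoptarget/nvmftestfini pair: job state and the generated rpcs.txt are removed, nvme-tcp and its dependencies are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt process is killed, the SPDK-tagged iptables rules are dropped, and the namespace plumbing is flushed. Done by hand, and with ip netns delete standing in for the harness's remove_spdk_ns helper, that is roughly:

modprobe -v -r nvme-tcp                               # also drops nvme_fabrics / nvme_keyring
kill "$nvmfpid" && wait "$nvmfpid" 2>/dev/null        # the nvmf_tgt started at nvmfappstart time
iptables-save | grep -v SPDK_NVMF | iptables-restore  # keep everything except the SPDK comment-tagged rules
ip netns delete cvl_0_0_ns_spdk
ip -4 addr flush cvl_0_1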
00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:19.601 ************************************ 00:24:19.601 START TEST nvmf_shutdown_tc2 00:24:19.601 ************************************ 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc2 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:19.601 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:24:19.602 06:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:19.602 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:19.603 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:19.603 06:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:19.603 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:19.603 Found net devices under 0000:09:00.0: cvl_0_0 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:19.603 Found net devices under 0000:09:00.1: cvl_0_1 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:19.603 06:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:19.603 06:34:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:19.604 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:19.604 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:24:19.604 00:24:19.604 --- 10.0.0.2 ping statistics --- 00:24:19.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.604 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:19.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:19.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:24:19.604 00:24:19.604 --- 10.0.0.1 ping statistics --- 00:24:19.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:19.604 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2139394 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2139394 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2139394 ']' 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:19.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
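The nvmf_tcp_init sequence traced above turns the two E810 ports into an initiator/target pair: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, while cvl_0_1 stays in the root namespace as 10.0.0.1, so the NVMe/TCP traffic goes over the E810 ports rather than a loopback device. The target application is then launched inside that namespace (core mask 0x1E pins its reactors to cores 1-4, -e 0xFFFF enables every tracepoint group) and waitforlisten polls /var/tmp/spdk.sock until the RPC server answers. Condensed from the trace, with the long workspace path shortened, the setup amounts to:

  # target port into its own namespace, initiator port stays in the root namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open TCP/4420 toward the initiator port; the comment lets nvmf_tcp_fini strip the rule on teardown
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
      -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
  ping -c 1 10.0.0.2                                  # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target namespace -> root namespace
  modprobe nvme-tcp
  ip netns exec cvl_0_0_ns_spdk \
      ./spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &               # nvmfpid=2139394 in this run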
00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 [2024-11-20 06:34:51.145915] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:19.604 [2024-11-20 06:34:51.145982] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:19.604 [2024-11-20 06:34:51.214022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:19.604 [2024-11-20 06:34:51.269246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:19.604 [2024-11-20 06:34:51.269300] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:19.604 [2024-11-20 06:34:51.269332] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:19.604 [2024-11-20 06:34:51.269342] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:19.604 [2024-11-20 06:34:51.269351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:19.604 [2024-11-20 06:34:51.270786] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:19.604 [2024-11-20 06:34:51.270849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:19.604 [2024-11-20 06:34:51.270912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:19.604 [2024-11-20 06:34:51.270915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 [2024-11-20 06:34:51.412914] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:19.604 06:34:51 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.604 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:19.864 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:19.864 Malloc1 
00:24:19.864 [2024-11-20 06:34:51.517664] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:19.864 Malloc2 00:24:19.864 Malloc3 00:24:19.864 Malloc4 00:24:19.864 Malloc5 00:24:20.122 Malloc6 00:24:20.122 Malloc7 00:24:20.122 Malloc8 00:24:20.122 Malloc9 00:24:20.122 Malloc10 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2139472 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2139472 /var/tmp/bdevperf.sock 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2139472 ']' 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:20.381 06:34:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:20.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
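The ten Malloc bdevs and the single NVMe/TCP listener on 10.0.0.2 port 4420 reported above are the output of the create_subsystems step: each pass of the {1..10} loop appended one heredoc of RPCs to rpcs.txt, and rpc_cmd replayed the whole file against the target. xtrace only shows the "cat" invocations, not the heredoc bodies, so the batch below is a plausible sketch rather than the literal contents of this run; bdev sizes, serial numbers and exact option spellings are illustrative, and only the Malloc$i names, the nqn.2016-06.io.spdk:cnode$i NQNs and the 10.0.0.2:4420 listener are confirmed by the surrounding log and the bdevperf JSON further down. bdevperf, started just above on /var/tmp/bdevperf.sock, attaches to these subsystems through that JSON.

  # hypothetical reconstruction of one create_subsystems iteration (options illustrative)
  for i in {1..10}; do
      {
          echo "bdev_malloc_create -b Malloc$i 128 512"            # RAM-backed bdev per subsystem
          echo "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i"
          echo "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i"
          echo "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420"
      } >> rpcs.txt
  done
  ./spdk/scripts/rpc.py -s /var/tmp/spdk.sock < rpcs.txt            # rpc_cmd replays the batch in one pass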
00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.381 { 00:24:20.381 "params": { 00:24:20.381 "name": "Nvme$subsystem", 00:24:20.381 "trtype": "$TEST_TRANSPORT", 00:24:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.381 "adrfam": "ipv4", 00:24:20.381 "trsvcid": "$NVMF_PORT", 00:24:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.381 "hdgst": ${hdgst:-false}, 00:24:20.381 "ddgst": ${ddgst:-false} 00:24:20.381 }, 00:24:20.381 "method": "bdev_nvme_attach_controller" 00:24:20.381 } 00:24:20.381 EOF 00:24:20.381 )") 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.381 { 00:24:20.381 "params": { 00:24:20.381 "name": "Nvme$subsystem", 00:24:20.381 "trtype": "$TEST_TRANSPORT", 00:24:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.381 "adrfam": "ipv4", 00:24:20.381 "trsvcid": "$NVMF_PORT", 00:24:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.381 "hdgst": ${hdgst:-false}, 00:24:20.381 "ddgst": ${ddgst:-false} 00:24:20.381 }, 00:24:20.381 "method": "bdev_nvme_attach_controller" 00:24:20.381 } 00:24:20.381 EOF 00:24:20.381 )") 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.381 { 00:24:20.381 "params": { 00:24:20.381 "name": "Nvme$subsystem", 00:24:20.381 "trtype": "$TEST_TRANSPORT", 00:24:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.381 "adrfam": "ipv4", 00:24:20.381 "trsvcid": "$NVMF_PORT", 00:24:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.381 "hdgst": ${hdgst:-false}, 00:24:20.381 "ddgst": ${ddgst:-false} 00:24:20.381 }, 00:24:20.381 "method": "bdev_nvme_attach_controller" 00:24:20.381 } 00:24:20.381 EOF 00:24:20.381 )") 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.381 { 00:24:20.381 "params": { 00:24:20.381 "name": "Nvme$subsystem", 00:24:20.381 
"trtype": "$TEST_TRANSPORT", 00:24:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.381 "adrfam": "ipv4", 00:24:20.381 "trsvcid": "$NVMF_PORT", 00:24:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.381 "hdgst": ${hdgst:-false}, 00:24:20.381 "ddgst": ${ddgst:-false} 00:24:20.381 }, 00:24:20.381 "method": "bdev_nvme_attach_controller" 00:24:20.381 } 00:24:20.381 EOF 00:24:20.381 )") 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.381 { 00:24:20.381 "params": { 00:24:20.381 "name": "Nvme$subsystem", 00:24:20.381 "trtype": "$TEST_TRANSPORT", 00:24:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.381 "adrfam": "ipv4", 00:24:20.381 "trsvcid": "$NVMF_PORT", 00:24:20.381 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.381 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.381 "hdgst": ${hdgst:-false}, 00:24:20.381 "ddgst": ${ddgst:-false} 00:24:20.381 }, 00:24:20.381 "method": "bdev_nvme_attach_controller" 00:24:20.381 } 00:24:20.381 EOF 00:24:20.381 )") 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.381 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.381 { 00:24:20.381 "params": { 00:24:20.381 "name": "Nvme$subsystem", 00:24:20.381 "trtype": "$TEST_TRANSPORT", 00:24:20.381 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.381 "adrfam": "ipv4", 00:24:20.381 "trsvcid": "$NVMF_PORT", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.382 "hdgst": ${hdgst:-false}, 00:24:20.382 "ddgst": ${ddgst:-false} 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 } 00:24:20.382 EOF 00:24:20.382 )") 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.382 { 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme$subsystem", 00:24:20.382 "trtype": "$TEST_TRANSPORT", 00:24:20.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "$NVMF_PORT", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.382 "hdgst": ${hdgst:-false}, 00:24:20.382 "ddgst": ${ddgst:-false} 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 } 00:24:20.382 EOF 00:24:20.382 )") 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.382 06:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.382 { 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme$subsystem", 00:24:20.382 "trtype": "$TEST_TRANSPORT", 00:24:20.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "$NVMF_PORT", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.382 "hdgst": ${hdgst:-false}, 00:24:20.382 "ddgst": ${ddgst:-false} 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 } 00:24:20.382 EOF 00:24:20.382 )") 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.382 { 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme$subsystem", 00:24:20.382 "trtype": "$TEST_TRANSPORT", 00:24:20.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "$NVMF_PORT", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.382 "hdgst": ${hdgst:-false}, 00:24:20.382 "ddgst": ${ddgst:-false} 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 } 00:24:20.382 EOF 00:24:20.382 )") 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:20.382 { 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme$subsystem", 00:24:20.382 "trtype": "$TEST_TRANSPORT", 00:24:20.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "$NVMF_PORT", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:20.382 "hdgst": ${hdgst:-false}, 00:24:20.382 "ddgst": ${ddgst:-false} 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 } 00:24:20.382 EOF 00:24:20.382 )") 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
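gen_nvmf_target_json, traced above, emits one connection stanza per argument (1 through 10) from that heredoc template and then joins and pretty-prints the fragments through jq into the single JSON document printed next: ten bdev_nvme_attach_controller entries, all aimed at 10.0.0.2:4420 but each at its own cnode$i subsystem, which is how bdevperf ends up with bdevs Nvme1n1 through Nvme10n1. The JSON never touches disk; it reaches bdevperf as /dev/fd/63 through process substitution. With the workspace path stripped, the invocation is roughly:

  # attach to all ten subsystems described by the generated JSON, then run a
  # queue-depth-64, 64 KiB 'verify' workload against each bdev for 10 seconds
  ./spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
      -q 64 -o 65536 -w verify -t 10

The waitforio loop that follows polls bdev_get_iostat on Nvme1n1 every 0.25 s until at least 100 reads have completed (3, then 72, then 195 here), at which point bdevperf is killed mid-run, its reported test time ending up at roughly 0.97 s of the requested 10 s, and the target is then checked to still be alive, which is the behaviour this shutdown case exercises.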
00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:24:20.382 06:34:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme1", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme2", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme3", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme4", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme5", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme6", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme7", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme8", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme9", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 },{ 00:24:20.382 "params": { 00:24:20.382 "name": "Nvme10", 00:24:20.382 "trtype": "tcp", 00:24:20.382 "traddr": "10.0.0.2", 00:24:20.382 "adrfam": "ipv4", 00:24:20.382 "trsvcid": "4420", 00:24:20.382 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:20.382 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:20.382 "hdgst": false, 00:24:20.382 "ddgst": false 00:24:20.382 }, 00:24:20.382 "method": "bdev_nvme_attach_controller" 00:24:20.382 }' 00:24:20.382 [2024-11-20 06:34:52.048502] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:20.382 [2024-11-20 06:34:52.048586] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2139472 ] 00:24:20.382 [2024-11-20 06:34:52.120877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:20.382 [2024-11-20 06:34:52.182538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:22.279 Running I/O for 10 seconds... 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@866 -- # return 0 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:22.537 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:22.538 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=72 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 72 -ge 100 ']' 00:24:22.795 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:24:23.053 06:34:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2139472 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2139472 ']' 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2139472 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2139472 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2139472' 00:24:23.053 killing process with pid 2139472 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2139472 00:24:23.053 06:34:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2139472 00:24:23.311 Received shutdown signal, test time was about 0.968979 seconds 00:24:23.311 00:24:23.311 Latency(us) 00:24:23.311 [2024-11-20T05:34:55.147Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:23.311 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme1n1 : 0.95 274.21 17.14 0.00 0.00 230067.12 3640.89 254765.13 00:24:23.311 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme2n1 : 0.94 204.91 12.81 0.00 0.00 302291.12 23690.05 256318.58 00:24:23.311 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme3n1 : 0.97 265.17 16.57 0.00 0.00 229426.63 18738.44 234570.33 00:24:23.311 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme4n1 : 0.96 270.64 16.91 0.00 0.00 219877.85 1711.22 259425.47 00:24:23.311 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme5n1 : 0.92 213.67 13.35 0.00 0.00 269636.04 4757.43 260978.92 00:24:23.311 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme6n1 : 0.97 264.42 16.53 0.00 0.00 216283.21 19223.89 254765.13 00:24:23.311 Job: Nvme7n1 (Core Mask 0x1, workload: verify, 
depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme7n1 : 0.93 205.41 12.84 0.00 0.00 271269.99 18252.99 259425.47 00:24:23.311 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme8n1 : 0.93 206.52 12.91 0.00 0.00 263999.08 20874.43 239230.67 00:24:23.311 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme9n1 : 0.95 203.00 12.69 0.00 0.00 263521.22 20583.16 262532.36 00:24:23.311 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:23.311 Verification LBA range: start 0x0 length 0x400 00:24:23.311 Nvme10n1 : 0.95 201.12 12.57 0.00 0.00 260546.31 22524.97 287387.50 00:24:23.311 [2024-11-20T05:34:55.147Z] =================================================================================================================== 00:24:23.311 [2024-11-20T05:34:55.147Z] Total : 2309.08 144.32 0.00 0.00 249254.93 1711.22 287387.50 00:24:23.569 06:34:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2139394 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:24.501 rmmod nvme_tcp 00:24:24.501 rmmod nvme_fabrics 00:24:24.501 rmmod nvme_keyring 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 2139394 ']' 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@518 -- # killprocess 2139394 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # '[' -z 2139394 ']' 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # kill -0 2139394 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # uname 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2139394 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2139394' 00:24:24.501 killing process with pid 2139394 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@971 -- # kill 2139394 00:24:24.501 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@976 -- # wait 2139394 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:25.067 06:34:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:27.598 00:24:27.598 real 0m7.957s 00:24:27.598 user 0m24.793s 00:24:27.598 sys 0m1.463s 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- 
# set +x 00:24:27.598 ************************************ 00:24:27.598 END TEST nvmf_shutdown_tc2 00:24:27.598 ************************************ 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:27.598 ************************************ 00:24:27.598 START TEST nvmf_shutdown_tc3 00:24:27.598 ************************************ 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc3 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:27.598 06:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:27.598 06:34:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:27.598 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:27.598 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:27.598 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices 
under 0000:09:00.0: cvl_0_0' 00:24:27.599 Found net devices under 0000:09:00.0: cvl_0_0 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:27.599 Found net devices under 0000:09:00.1: cvl_0_1 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:27.599 06:34:58 
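The interface names are resolved purely through sysfs: each detected port's net device is whatever appears under /sys/bus/pci/devices/<addr>/net/. A standalone sketch of that lookup for the first address in the trace (assumes the port is bound to a netdev driver such as ice):

  pci=0000:09:00.0
  # The driver symlink names the bound driver (ice in this run); the net/
  # directory names the kernel interface created for the port (cvl_0_0 here).
  basename "$(readlink "/sys/bus/pci/devices/${pci}/driver")"
  ls "/sys/bus/pci/devices/${pci}/net/"

With two ports found, nvmf_tcp_init takes the first (cvl_0_0) as the target interface and the second (cvl_0_1) as the initiator interface, as the assignments above show.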
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:27.599 06:34:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:27.599 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.599 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:24:27.599 00:24:27.599 --- 10.0.0.2 ping statistics --- 00:24:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.599 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:27.599 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
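The target side is now confined to its own namespace: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1/24, TCP port 4420 is opened, and reachability is checked with one ping in each direction. Condensed from the commands traced above (run as root; assumes the two cvl interfaces exist):

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # NVMe/TCP listener port
  ping -c 1 10.0.0.2                                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator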
00:24:27.599 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:24:27.599 00:24:27.599 --- 10.0.0.1 ping statistics --- 00:24:27.599 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.599 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=2140502 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 2140502 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2140502 ']' 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:27.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
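nvmfappstart launches nvmf_tgt inside the namespace and then blocks until the RPC socket answers. Reduced to its essentials (the trace uses the full Jenkins workspace path, and its repeated `ip netns exec` prefixes collapse to the same effect; the polling loop below is a hypothetical stand-in for waitforlisten, not its actual implementation):

  # -m 0x1E pins the target to cores 1-4; -e 0xFFFF enables all tracepoint groups.
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
  nvmfpid=$!
  # Wait until the app answers a cheap RPC on its default UNIX socket.
  until rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.5
  done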
00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.599 [2024-11-20 06:34:59.132518] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:27.599 [2024-11-20 06:34:59.132591] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:27.599 [2024-11-20 06:34:59.203184] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:27.599 [2024-11-20 06:34:59.261157] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:27.599 [2024-11-20 06:34:59.261204] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:27.599 [2024-11-20 06:34:59.261228] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:27.599 [2024-11-20 06:34:59.261239] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:27.599 [2024-11-20 06:34:59.261248] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:27.599 [2024-11-20 06:34:59.262732] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:27.599 [2024-11-20 06:34:59.262762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:27.599 [2024-11-20 06:34:59.262820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:27.599 [2024-11-20 06:34:59.262823] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.599 [2024-11-20 06:34:59.414475] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:27.599 06:34:59 
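With the target up, the TCP transport is created over the same RPC socket before any subsystems are defined, and num_subsystems is set to 1..10 for the next phase. A hypothetical equivalent of the transport step using scripts/rpc.py directly (the trace also passes -o; -u sets the I/O unit size to 8192 bytes):

  rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -u 8192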
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.599 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.857 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.857 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.857 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.857 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.857 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.857 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:27.858 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:27.858 Malloc1 
00:24:27.858 [2024-11-20 06:34:59.517933] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:27.858 Malloc2 00:24:27.858 Malloc3 00:24:27.858 Malloc4 00:24:28.116 Malloc5 00:24:28.116 Malloc6 00:24:28.116 Malloc7 00:24:28.116 Malloc8 00:24:28.116 Malloc9 00:24:28.116 Malloc10 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2140566 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2140566 /var/tmp/bdevperf.sock 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # '[' -z 2140566 ']' 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:28.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
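Each `# cat` step above appends one subsystem's worth of RPC lines to rpcs.txt, and the batch is then replayed against the target in a single rpc_cmd call; the Malloc1 ... Malloc10 lines and the 10.0.0.2:4420 listener notice are that batch taking effect. The file contents are not echoed in the trace, but a hypothetical per-subsystem block built from standard rpc.py commands could look like this (sizes and serial number assumed):

  i=1
  rpc.py bdev_malloc_create -b Malloc${i} 64 512                              # 64 MiB bdev, 512 B blocks
  rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode${i} -a -s SPDK${i}
  rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode${i} Malloc${i}
  rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode${i} -t tcp -a 10.0.0.2 -s 4420

The cnode1 ... cnode10 NQNs are the ones bdevperf is configured to attach to just below.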
00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.374 06:34:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.374 { 00:24:28.374 "params": { 00:24:28.374 "name": "Nvme$subsystem", 00:24:28.374 "trtype": "$TEST_TRANSPORT", 00:24:28.374 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.374 "adrfam": "ipv4", 00:24:28.374 "trsvcid": "$NVMF_PORT", 00:24:28.374 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.374 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.374 "hdgst": ${hdgst:-false}, 00:24:28.374 "ddgst": ${ddgst:-false} 00:24:28.374 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 
"trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:24:28.375 { 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme$subsystem", 00:24:28.375 "trtype": "$TEST_TRANSPORT", 00:24:28.375 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "$NVMF_PORT", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.375 "hdgst": ${hdgst:-false}, 00:24:28.375 "ddgst": ${ddgst:-false} 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 } 00:24:28.375 EOF 00:24:28.375 )") 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 
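gen_nvmf_target_json emits one bdev_nvme_attach_controller fragment per subsystem into a bash array, joins the fragments with a comma IFS, and runs the result through jq on its way to bdevperf's --json input (the joined list is printed just below). A minimal standalone sketch of that join-and-validate pattern, with only enough fields to illustrate it:

  config=()
  for subsystem in 1 2 3; do
      config+=("{\"name\": \"Nvme${subsystem}\", \"traddr\": \"10.0.0.2\"}")
  done
  # Join with commas, wrap in an enclosing array, and let jq validate/pretty-print.
  joined=$(IFS=,; printf '%s' "${config[*]}")
  printf '[%s]\n' "$joined" | jq .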
00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:24:28.375 06:35:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme1", 00:24:28.375 "trtype": "tcp", 00:24:28.375 "traddr": "10.0.0.2", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "4420", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.375 "hdgst": false, 00:24:28.375 "ddgst": false 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 },{ 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme2", 00:24:28.375 "trtype": "tcp", 00:24:28.375 "traddr": "10.0.0.2", 00:24:28.375 "adrfam": "ipv4", 00:24:28.375 "trsvcid": "4420", 00:24:28.375 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:28.375 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:28.375 "hdgst": false, 00:24:28.375 "ddgst": false 00:24:28.375 }, 00:24:28.375 "method": "bdev_nvme_attach_controller" 00:24:28.375 },{ 00:24:28.375 "params": { 00:24:28.375 "name": "Nvme3", 00:24:28.375 "trtype": "tcp", 00:24:28.375 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme4", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme5", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme6", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme7", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme8", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host8", 
00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme9", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 },{ 00:24:28.376 "params": { 00:24:28.376 "name": "Nvme10", 00:24:28.376 "trtype": "tcp", 00:24:28.376 "traddr": "10.0.0.2", 00:24:28.376 "adrfam": "ipv4", 00:24:28.376 "trsvcid": "4420", 00:24:28.376 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:28.376 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:28.376 "hdgst": false, 00:24:28.376 "ddgst": false 00:24:28.376 }, 00:24:28.376 "method": "bdev_nvme_attach_controller" 00:24:28.376 }' 00:24:28.376 [2024-11-20 06:35:00.050752] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:28.376 [2024-11-20 06:35:00.050852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140566 ] 00:24:28.376 [2024-11-20 06:35:00.125745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:28.376 [2024-11-20 06:35:00.187747] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.273 Running I/O for 10 seconds... 00:24:30.273 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:30.273 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@866 -- # return 0 00:24:30.273 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:30.273 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:30.531 06:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:24:30.531 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=71 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 71 -ge 100 ']' 00:24:30.789 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@61 -- # read_io_count=135 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 135 -ge 100 ']' 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2140502 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2140502 ']' 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2140502 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # uname 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2140502 00:24:31.062 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:31.063 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:31.063 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2140502' 00:24:31.063 killing process with pid 2140502 00:24:31.063 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@971 -- # kill 2140502 00:24:31.063 06:35:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@976 -- # wait 2140502 00:24:31.063 [2024-11-20 06:35:02.776839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3640 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.776930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3640 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.776956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3640 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.776969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3640 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778474] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is 
same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778486] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778498] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778547] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778770] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778944] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.778992] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the 
state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779052] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779087] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779116] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.779223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x257bf20 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.063 [2024-11-20 06:35:02.780881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.780988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 
06:35:02.781111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781340] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781380] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same 
with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781431] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781443] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781519] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781556] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.781568] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a3b30 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.783486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783548] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783618] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.064 [2024-11-20 06:35:02.783664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbc620 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.783777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783842] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18546f0 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.783941] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.783976] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.783990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.784005] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.784018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.784032] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.064 [2024-11-20 06:35:02.784046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.064 [2024-11-20 06:35:02.784059] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18486e0 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.785382] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.064 [2024-11-20 06:35:02.785420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785480] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785518] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785543] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785555] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785567] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785640] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785716] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785741] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.785815] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.785850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.785876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.785889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.785915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785918] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.785928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.785941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.785954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:12[2024-11-20 06:35:02.785967] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.785982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.785999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.786010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.786023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.786040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.786053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.786066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 
06:35:02.786076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.786079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:1[2024-11-20 06:35:02.786092] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with [2024-11-20 06:35:02.786109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:24:31.065 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.786123] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.786135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.786149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.065 [2024-11-20 06:35:02.786162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.065 [2024-11-20 06:35:02.786171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.065 [2024-11-20 06:35:02.786175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.786187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:1[2024-11-20 06:35:02.786188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4380 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.786204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.066 [2024-11-20 06:35:02.786908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.786981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.786995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.787011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.787024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.787043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:1[2024-11-20 06:35:02.787036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.787066] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.787081] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.787094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.787119] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787123] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.787132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.787145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.787158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with [2024-11-20 06:35:02.787169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:1the state(6) to be set 00:24:31.066 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.066 [2024-11-20 06:35:02.787184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.066 [2024-11-20 06:35:02.787197] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.066 [2024-11-20 06:35:02.787203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787210] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1[2024-11-20 06:35:02.787267] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.787282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with [2024-11-20 06:35:02.787311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:1the state(6) to be set 00:24:31.067 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with [2024-11-20 06:35:02.787328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:24:31.067 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787342] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787355] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787367] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787381] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.787393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 
06:35:02.787451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with [2024-11-20 06:35:02.787502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:1the state(6) to be set 00:24:31.067 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with [2024-11-20 06:35:02.787518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:24:31.067 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787531] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 
06:35:02.787596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787622] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with [2024-11-20 06:35:02.787664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:1the state(6) to be set 00:24:31.067 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.067 [2024-11-20 06:35:02.787703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.067 [2024-11-20 06:35:02.787711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.067 [2024-11-20 06:35:02.787716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1[2024-11-20 06:35:02.787728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.787743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 the state(6) to be set 
00:24:31.068 [2024-11-20 06:35:02.787757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.787770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.787782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.787795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.787807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.787823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.787836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.787861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.787874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787886] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4850 is same with the state(6) to be set 00:24:31.068 [2024-11-20 06:35:02.787902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such 
device or address) on qpair id 1 00:24:31.068 [2024-11-20 06:35:02.788094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.068 [2024-11-20 06:35:02.788728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.068 [2024-11-20 06:35:02.788901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.068 [2024-11-20 06:35:02.788914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.069 [2024-11-20 06:35:02.788929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.788926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set 00:24:31.069 [2024-11-20 06:35:02.788944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.069 [2024-11-20 06:35:02.788954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set 00:24:31.069 [2024-11-20 06:35:02.788959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.788969] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set 00:24:31.069 [2024-11-20 06:35:02.788973] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.788982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.788989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.788994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789038] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789079] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789091] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789103] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789144] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789169] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789181] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789276] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789349] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789387] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.069 [2024-11-20 06:35:02.789412] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789424] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.069 [2024-11-20 06:35:02.789438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set
00:24:31.069 [2024-11-20 06:35:02.789441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:24:31.069 [2024-11-20 06:35:02.789451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x25a4d20 is same with the state(6) to be set 00:24:31.069 [2024-11-20 06:35:02.789457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.789472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.069 [2024-11-20 06:35:02.789487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.789502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.069 [2024-11-20 06:35:02.789517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.789531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.069 [2024-11-20 06:35:02.789546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.789559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.069 [2024-11-20 06:35:02.789574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.069 [2024-11-20 06:35:02.789588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 
[2024-11-20 06:35:02.789753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.789982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.789997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.790010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.790025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 
06:35:02.790039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.790054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.070 [2024-11-20 06:35:02.790068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.070 [2024-11-20 06:35:02.790888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790921] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.790998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791245] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791320] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791348] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791373] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791397] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791422] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791459] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791484] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791496] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.070 [2024-11-20 06:35:02.791532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791635] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791697] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the 
state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.791720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2333c90 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792901] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792913] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792950] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.792998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793022] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793046] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793058] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set 00:24:31.071 [2024-11-20 06:35:02.793090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793101] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793150] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793162] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793174] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793199] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793211] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793223] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793266] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.071 [2024-11-20 06:35:02.793298] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.071 [2024-11-20 06:35:02.793344] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.071 [2024-11-20 06:35:02.793383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.071 [2024-11-20 06:35:02.793396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793411] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.071 [2024-11-20 06:35:02.793423] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.071 [2024-11-20 06:35:02.793436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.071 [2024-11-20 06:35:02.793449] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.071 [2024-11-20 06:35:02.793462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.071 [2024-11-20 06:35:02.793480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.071 [2024-11-20 06:35:02.793489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.793502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.793515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.793527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793541] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.793554] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.793566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.793578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.793606] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.793619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.793631] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.793644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.793656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334160 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.793674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.793984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.072 [2024-11-20 06:35:02.794254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.072 [2024-11-20 06:35:02.794270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.794283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.794298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.794336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.794359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.794374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.794389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.794403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.794418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.794432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.794438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set
00:24:31.072 [2024-11-20 06:35:02.794447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.072 [2024-11-20 06:35:02.794464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.072 [2024-11-20 06:35:02.794466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set
00:24:31.073 [2024-11-20 06:35:02.794479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.073 [2024-11-20 06:35:02.794481] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set
00:24:31.073 [2024-11-20 06:35:02.794494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:31.073 [2024-11-20 06:35:02.794495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set
00:24:31.073 [2024-11-20 06:35:02.794509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set
00:24:31.073 [2024-11-20 06:35:02.794512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:31.073 [2024-11-20 06:35:02.794522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set
00:24:31.073 [2024-11-20 06:35:02.794526] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794535] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794548] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794566] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794580] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794592] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794663] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 
06:35:02.794698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with [2024-11-20 06:35:02.794714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:1the state(6) to be set 00:24:31.073 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with [2024-11-20 06:35:02.794728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:24:31.073 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 
00:24:31.073 [2024-11-20 06:35:02.794846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128[2024-11-20 06:35:02.794865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.794880] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794961] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with [2024-11-20 06:35:02.794960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:1the state(6) to be set 00:24:31.073 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.794976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.794988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.794993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.795000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.795007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.795012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.795022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:1[2024-11-20 06:35:02.795024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.795037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with [2024-11-20 06:35:02.795037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:24:31.073 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.073 [2024-11-20 06:35:02.795051] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.795055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.073 [2024-11-20 06:35:02.795063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.073 [2024-11-20 06:35:02.795069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795088] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.795099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with [2024-11-20 06:35:02.795129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cthe state(6) to be set 00:24:31.074 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the 
state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 c[2024-11-20 06:35:02.795191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795204] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795216] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795241] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2334630 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:1[2024-11-20 06:35:02.795330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2334630 is same with 28 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.795347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.074 [2024-11-20 06:35:02.795384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:31.074 [2024-11-20 06:35:02.795764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:31.074 [2024-11-20 06:35:02.795829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1850890 (9): Bad file descriptor 00:24:31.074 [2024-11-20 06:35:02.795861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18546f0 (9): Bad file descriptor 00:24:31.074 [2024-11-20 06:35:02.795883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbc620 (9): Bad file descriptor 00:24:31.074 [2024-11-20 06:35:02.795935] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.795965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.795984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.795998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796063] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e150 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.796110] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796200] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796226] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bc110 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.796265] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796352] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91020 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.796452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796515] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796541] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c98ab0 is same with the state(6) to be set 00:24:31.074 [2024-11-20 06:35:02.796623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.074 [2024-11-20 06:35:02.796659] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.074 [2024-11-20 06:35:02.796672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.075 [2024-11-20 06:35:02.796699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.075 [2024-11-20 06:35:02.796726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796738] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74dd0 is same with the state(6) to be set 00:24:31.075 [2024-11-20 06:35:02.796791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.075 [2024-11-20 06:35:02.796812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.075 [2024-11-20 06:35:02.796841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796854] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.075 [2024-11-20 06:35:02.796867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:31.075 [2024-11-20 06:35:02.796894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.796907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851be0 is same with the state(6) to be set 00:24:31.075 [2024-11-20 06:35:02.796937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18486e0 (9): Bad file descriptor 00:24:31.075 [2024-11-20 06:35:02.798614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:31.075 [2024-11-20 06:35:02.798651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bc110 (9): Bad file descriptor 00:24:31.075 [2024-11-20 06:35:02.799669] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.075 [2024-11-20 06:35:02.799807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.075 [2024-11-20 06:35:02.799836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18546f0 with addr=10.0.0.2, port=4420 00:24:31.075 [2024-11-20 06:35:02.799853] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18546f0 is same with the state(6) to be set 00:24:31.075 [2024-11-20 06:35:02.799944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.075 [2024-11-20 06:35:02.799969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1850890 with addr=10.0.0.2, port=4420 00:24:31.075 [2024-11-20 06:35:02.799984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850890 is same with the state(6) to be set 00:24:31.075 [2024-11-20 06:35:02.800075] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.075 [2024-11-20 06:35:02.800149] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.075 [2024-11-20 06:35:02.800221] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.075 [2024-11-20 06:35:02.800817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.075 [2024-11-20 06:35:02.800846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bc110 with addr=10.0.0.2, port=4420 00:24:31.075 [2024-11-20 06:35:02.800862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bc110 is same with the state(6) to be set 00:24:31.075 [2024-11-20 06:35:02.800881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18546f0 (9): Bad file descriptor 00:24:31.075 [2024-11-20 06:35:02.800901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1850890 (9): Bad file descriptor 00:24:31.075 [2024-11-20 06:35:02.800990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 
06:35:02.801060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801391] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801710] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.075 [2024-11-20 06:35:02.801770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.075 [2024-11-20 06:35:02.801786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.801817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.801851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.801882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.801912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.801942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.801972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.801986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802016] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802340] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802648] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.076 [2024-11-20 06:35:02.802857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.076 [2024-11-20 06:35:02.802873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.802887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.802903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.802917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.802932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.802946] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.802962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.802977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.802993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.803011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.803026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a5a530 is same with the state(6) to be set 00:24:31.077 [2024-11-20 06:35:02.803219] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.077 [2024-11-20 06:35:02.803313] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.077 [2024-11-20 06:35:02.803422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bc110 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.803448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:31.077 [2024-11-20 06:35:02.803462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:31.077 [2024-11-20 06:35:02.803478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:31.077 [2024-11-20 06:35:02.803494] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:31.077 [2024-11-20 06:35:02.803510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:31.077 [2024-11-20 06:35:02.803523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:31.077 [2024-11-20 06:35:02.803536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:31.077 [2024-11-20 06:35:02.803547] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:31.077 [2024-11-20 06:35:02.804805] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:31.077 [2024-11-20 06:35:02.804844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:31.077 [2024-11-20 06:35:02.804876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851be0 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.804898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:31.077 [2024-11-20 06:35:02.804912] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:31.077 [2024-11-20 06:35:02.804925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 
00:24:31.077 [2024-11-20 06:35:02.804939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:31.077 [2024-11-20 06:35:02.805459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.077 [2024-11-20 06:35:02.805487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1851be0 with addr=10.0.0.2, port=4420 00:24:31.077 [2024-11-20 06:35:02.805504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851be0 is same with the state(6) to be set 00:24:31.077 [2024-11-20 06:35:02.805567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851be0 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.805646] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:31.077 [2024-11-20 06:35:02.805665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:31.077 [2024-11-20 06:35:02.805679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:31.077 [2024-11-20 06:35:02.805692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:24:31.077 [2024-11-20 06:35:02.805786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7e150 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.805832] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c91020 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.805865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98ab0 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.805897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c74dd0 (9): Bad file descriptor 00:24:31.077 [2024-11-20 06:35:02.806035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.077 [2024-11-20 06:35:02.806685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.077 [2024-11-20 06:35:02.806699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.806973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.806989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.078 [2024-11-20 06:35:02.807122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 
06:35:02.807440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807747] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.807869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.807885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.078 [2024-11-20 06:35:02.813639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.078 [2024-11-20 06:35:02.813696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.813711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.813729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.813751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.813768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.813783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.813799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.813813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.813828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d42970 is same with the state(6) to be set 00:24:31.079 [2024-11-20 06:35:02.815234] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815592] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.815975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.815991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.079 [2024-11-20 06:35:02.816326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.079 [2024-11-20 06:35:02.816343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.080 [2024-11-20 06:35:02.816859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.816980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.816995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 
06:35:02.817165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.817257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.817271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a947c0 is same with the state(6) to be set 00:24:31.080 [2024-11-20 06:35:02.818503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:31.080 [2024-11-20 06:35:02.818534] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:31.080 [2024-11-20 06:35:02.818973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.080 [2024-11-20 06:35:02.819005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18486e0 with addr=10.0.0.2, port=4420 00:24:31.080 [2024-11-20 06:35:02.819023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18486e0 is same with the state(6) to be set 00:24:31.080 [2024-11-20 06:35:02.819120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.080 [2024-11-20 06:35:02.819146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbc620 with addr=10.0.0.2, port=4420 00:24:31.080 [2024-11-20 06:35:02.819162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbc620 is same with the state(6) to be set 00:24:31.080 [2024-11-20 06:35:02.819518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.819544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.080 [2024-11-20 06:35:02.819565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.080 [2024-11-20 06:35:02.819592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.819983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.819998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.081 [2024-11-20 06:35:02.820597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.081 [2024-11-20 06:35:02.820797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.081 [2024-11-20 06:35:02.820814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.820828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.820844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.820857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.820877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.820893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 
06:35:02.820910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.820924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.820941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.820955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.820970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.820984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821209] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.821543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.821558] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c557a0 is same with the state(6) to be set 00:24:31.082 [2024-11-20 06:35:02.822820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.822844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.822864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.822880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.822896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.822915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.822933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.822948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.822963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.822977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.822993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823097] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.082 [2024-11-20 06:35:02.823263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.082 [2024-11-20 06:35:02.823277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.823979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.823993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:31.083 [2024-11-20 06:35:02.824351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.083 [2024-11-20 06:35:02.824490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.083 [2024-11-20 06:35:02.824506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 
06:35:02.824655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.824790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.824804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c58220 is same with the state(6) to be set 00:24:31.084 [2024-11-20 06:35:02.826035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826194] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.084 [2024-11-20 06:35:02.826939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.084 [2024-11-20 06:35:02.826955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.826969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.826986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827122] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827437] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827740] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.827979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.827996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.085 [2024-11-20 06:35:02.828014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.085 [2024-11-20 06:35:02.828029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c597c0 is same with the state(6) to be set 00:24:31.086 [2024-11-20 06:35:02.829258] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 
nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.829984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.829998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.086 [2024-11-20 06:35:02.830450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.086 [2024-11-20 06:35:02.830464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:31.087 [2024-11-20 06:35:02.830524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 
06:35:02.830828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.830976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.830992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831124] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:31.087 [2024-11-20 06:35:02.831220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:31.087 [2024-11-20 06:35:02.831235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c5ac30 is same with the state(6) to be set 00:24:31.087 [2024-11-20 06:35:02.833146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:31.087 [2024-11-20 06:35:02.833186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:31.087 [2024-11-20 06:35:02.833206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:31.087 [2024-11-20 06:35:02.833225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:31.087 [2024-11-20 06:35:02.833243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:31.087 [2024-11-20 06:35:02.833260] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:31.087 [2024-11-20 06:35:02.833353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18486e0 (9): Bad file descriptor 00:24:31.087 [2024-11-20 06:35:02.833381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbc620 (9): Bad file descriptor 00:24:31.087 [2024-11-20 06:35:02.833444] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 00:24:31.087 [2024-11-20 06:35:02.833469] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:24:31.087 [2024-11-20 06:35:02.833489] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:24:31.087 [2024-11-20 06:35:02.833508] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 
00:24:31.087 [2024-11-20 06:35:02.833603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:24:31.087 task offset: 25088 on job bdev=Nvme1n1 fails
00:24:31.087
00:24:31.087 Latency(us)
00:24:31.087 [2024-11-20T05:35:02.923Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:24:31.087 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.087 Job: Nvme1n1 ended in about 0.90 seconds with error
00:24:31.087 Verification LBA range: start 0x0 length 0x400
00:24:31.087 Nvme1n1 : 0.90 218.60 13.66 71.38 0.00 218195.39 5679.79 253211.69
00:24:31.087 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.087 Job: Nvme2n1 ended in about 0.92 seconds with error
00:24:31.087 Verification LBA range: start 0x0 length 0x400
00:24:31.087 Nvme2n1 : 0.92 143.52 8.97 69.58 0.00 291068.51 20194.80 259425.47
00:24:31.087 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.087 Job: Nvme3n1 ended in about 0.90 seconds with error
00:24:31.087 Verification LBA range: start 0x0 length 0x400
00:24:31.087 Nvme3n1 : 0.90 213.86 13.37 71.29 0.00 212672.57 9709.04 253211.69
00:24:31.087 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.087 Job: Nvme4n1 ended in about 0.91 seconds with error
00:24:31.087 Verification LBA range: start 0x0 length 0x400
00:24:31.087 Nvme4n1 : 0.91 215.51 13.47 70.37 0.00 207825.39 10679.94 243891.01
00:24:31.087 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.087 Job: Nvme5n1 ended in about 0.93 seconds with error
00:24:31.087 Verification LBA range: start 0x0 length 0x400
00:24:31.087 Nvme5n1 : 0.93 138.02 8.63 69.01 0.00 281433.63 19709.35 273406.48
00:24:31.087 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.087 Job: Nvme6n1 ended in about 0.90 seconds with error
00:24:31.087 Verification LBA range: start 0x0 length 0x400
00:24:31.087 Nvme6n1 : 0.90 212.60 13.29 70.87 0.00 200313.55 14272.28 239230.67
00:24:31.088 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.088 Job: Nvme7n1 ended in about 0.93 seconds with error
00:24:31.088 Verification LBA range: start 0x0 length 0x400
00:24:31.088 Nvme7n1 : 0.93 141.84 8.86 68.77 0.00 264899.68 20874.43 250104.79
00:24:31.088 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.088 Job: Nvme8n1 ended in about 0.93 seconds with error
00:24:31.088 Verification LBA range: start 0x0 length 0x400
00:24:31.088 Nvme8n1 : 0.93 137.06 8.57 68.53 0.00 265352.53 17864.63 240784.12
00:24:31.088 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.088 Job: Nvme9n1 ended in about 0.94 seconds with error
00:24:31.088 Verification LBA range: start 0x0 length 0x400
00:24:31.088 Nvme9n1 : 0.94 136.60 8.54 68.30 0.00 260685.62 20097.71 260978.92
00:24:31.088 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:24:31.088 Job: Nvme10n1 ended in about 0.92 seconds with error
00:24:31.088 Verification LBA range: start 0x0 length 0x400
00:24:31.088 Nvme10n1 : 0.92 138.65 8.67 69.33 0.00 250305.80 21554.06 287387.50
00:24:31.088 [2024-11-20T05:35:02.924Z] ===================================================================================================================
00:24:31.088 [2024-11-20T05:35:02.924Z] Total : 1696.25 106.02 697.42 0.00 241128.15 5679.79 287387.50
00:24:31.088
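Read row by row, the bdevperf summary above is easy to sanity-check: the Total line is simply the column-wise sum of the ten per-device rows, to within the two-decimal rounding in the report. A minimal, purely illustrative bash sketch of that check (the values are copied from the IOPS and Fail/s columns above; the awk pipeline is not part of the test scripts):

    #!/usr/bin/env bash
    # Spot-check the Total row against the per-device rows copied from the table above.
    # Small drift (1696.26 vs 1696.25) is just the two-decimal rounding in the report.
    iops="218.60 143.52 213.86 215.51 138.02 212.60 141.84 137.06 136.60 138.65"
    fail="71.38 69.58 71.29 70.37 69.01 70.87 68.77 68.53 68.30 69.33"
    echo "$iops" | tr ' ' '\n' | awk '{s+=$1} END {printf "IOPS sum   : %.2f (Total row: 1696.25)\n", s}'
    echo "$fail" | tr ' ' '\n' | awk '{s+=$1} END {printf "Fail/s sum : %.2f (Total row:  697.42)\n", s}'

The MiB/s column sums the same way (106.03 against the reported 106.02).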
[2024-11-20 06:35:02.859517] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:31.088 [2024-11-20 06:35:02.859613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:24:31.088 [2024-11-20 06:35:02.859901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.859939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1850890 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.859958] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1850890 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.860061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.860091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18546f0 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.860108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18546f0 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.860215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.860241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x17bc110 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.860258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x17bc110 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.860368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.860397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1851be0 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.860413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851be0 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.860508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.860535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c7e150 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.860552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c7e150 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.860635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.860672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c91020 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.860688] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c91020 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.860706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.860719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.860736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:24:31.088 [2024-11-20 06:35:02.860755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.860773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.860786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.860799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.860811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.862149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.862180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c74dd0 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.862206] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c74dd0 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.862287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.088 [2024-11-20 06:35:02.862326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c98ab0 with addr=10.0.0.2, port=4420 00:24:31.088 [2024-11-20 06:35:02.862342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c98ab0 is same with the state(6) to be set 00:24:31.088 [2024-11-20 06:35:02.862367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1850890 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.862390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18546f0 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.862408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x17bc110 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.862426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851be0 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.862444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c7e150 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.862461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c91020 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.862531] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:31.088 [2024-11-20 06:35:02.862559] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:24:31.088 [2024-11-20 06:35:02.862578] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:24:31.088 [2024-11-20 06:35:02.862613] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 
00:24:31.088 [2024-11-20 06:35:02.862634] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:24:31.088 [2024-11-20 06:35:02.862655] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:24:31.088 [2024-11-20 06:35:02.862990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c74dd0 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.863021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c98ab0 (9): Bad file descriptor 00:24:31.088 [2024-11-20 06:35:02.863040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863053] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.863068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.863081] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.863096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863109] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.863122] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.863134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.863148] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.863174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.863185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.863199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.863224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.863236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:24:31.088 [2024-11-20 06:35:02.863250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863262] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.863275] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.863299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.863323] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:31.088 [2024-11-20 06:35:02.863349] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:24:31.088 [2024-11-20 06:35:02.863367] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:24:31.088 [2024-11-20 06:35:02.863451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:31.088 [2024-11-20 06:35:02.863477] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:31.088 [2024-11-20 06:35:02.863511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:31.088 [2024-11-20 06:35:02.863528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:31.089 [2024-11-20 06:35:02.863542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:24:31.089 [2024-11-20 06:35:02.863554] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:24:31.089 [2024-11-20 06:35:02.863569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:31.089 [2024-11-20 06:35:02.863582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:31.089 [2024-11-20 06:35:02.863598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:24:31.089 [2024-11-20 06:35:02.863610] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 
00:24:31.089 [2024-11-20 06:35:02.863733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.089 [2024-11-20 06:35:02.863761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cbc620 with addr=10.0.0.2, port=4420 00:24:31.089 [2024-11-20 06:35:02.863778] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cbc620 is same with the state(6) to be set 00:24:31.089 [2024-11-20 06:35:02.863870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:31.089 [2024-11-20 06:35:02.863895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18486e0 with addr=10.0.0.2, port=4420 00:24:31.089 [2024-11-20 06:35:02.863911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18486e0 is same with the state(6) to be set 00:24:31.089 [2024-11-20 06:35:02.863958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cbc620 (9): Bad file descriptor 00:24:31.089 [2024-11-20 06:35:02.863983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18486e0 (9): Bad file descriptor 00:24:31.089 [2024-11-20 06:35:02.864021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:31.089 [2024-11-20 06:35:02.864039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:31.089 [2024-11-20 06:35:02.864053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:31.089 [2024-11-20 06:35:02.864067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:24:31.089 [2024-11-20 06:35:02.864082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:31.089 [2024-11-20 06:35:02.864094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:31.089 [2024-11-20 06:35:02.864107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:31.089 [2024-11-20 06:35:02.864119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
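Every reconnect attempt in this stretch dies in posix_sock_create() with errno = 111, which on Linux is ECONNREFUSED: with the target application stopping (the spdk_app_stop warning above), nothing is listening on 10.0.0.2:4420 any more, so each "resetting controller" attempt can only end in "Resetting controller failed." A hypothetical probe from the initiator side, using bash's /dev/tcp redirection rather than anything the test scripts actually run, would show the same refusal:

    # Hypothetical reachability probe for the address/port seen in the log above.
    # A refused connection here matches the repeated "connect() failed, errno = 111".
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
        echo "something is still listening on 10.0.0.2:4420"
    else
        echo "10.0.0.2:4420 refused/unreachable (ECONNREFUSED is errno 111 on Linux)"
    fi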
00:24:31.654 06:35:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2140566 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@650 -- # local es=0 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2140566 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@638 -- # local arg=wait 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # type -t wait 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # wait 2140566 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@653 -- # es=255 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@662 -- # es=127 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # case "$es" in 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@670 -- # es=1 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:32.591 rmmod nvme_tcp 00:24:32.591 
rmmod nvme_fabrics 00:24:32.591 rmmod nvme_keyring 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 2140502 ']' 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 2140502 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # '[' -z 2140502 ']' 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # kill -0 2140502 00:24:32.591 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2140502) - No such process 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2140502 is not found' 00:24:32.591 Process with pid 2140502 is not found 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:32.591 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:24:32.849 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:32.849 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:32.850 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:32.850 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:32.850 06:35:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:34.758 00:24:34.758 real 0m7.566s 00:24:34.758 user 0m18.827s 00:24:34.758 sys 0m1.469s 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:34.758 ************************************ 00:24:34.758 END TEST nvmf_shutdown_tc3 00:24:34.758 ************************************ 00:24:34.758 06:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:34.758 ************************************ 00:24:34.758 START TEST nvmf_shutdown_tc4 00:24:34.758 ************************************ 00:24:34.758 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1127 -- # nvmf_shutdown_tc4 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:34.759 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:34.759 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.759 06:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:34.759 Found net devices under 0000:09:00.0: cvl_0_0 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:34.759 Found net devices under 0000:09:00.1: cvl_0_1 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:34.759 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:34.760 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:34.760 06:35:06 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:34.760 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:34.760 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:34.760 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:34.760 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:34.760 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:35.019 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:35.019 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:24:35.019 00:24:35.019 --- 10.0.0.2 ping statistics --- 00:24:35.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.019 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:35.019 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:35.019 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:24:35.019 00:24:35.019 --- 10.0.0.1 ping statistics --- 00:24:35.019 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:35.019 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=2141472 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 2141472 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@833 -- # '[' -z 2141472 ']' 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:35.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
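The nvmftestinit trace above reduces to a small network-namespace topology: the target-side e810 port cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2, the initiator-side port cvl_0_1 stays in the default namespace as 10.0.0.1, an iptables ACCEPT rule opens TCP/4420, and a single ping in each direction verifies the path before the target is started. Condensed into the underlying iproute2/ping commands (names and addresses as printed in the trace; a sketch only, requiring root on matching hardware):

    # Condensed form of the nvmftestinit steps traced above (not a replacement for
    # nvmf/common.sh, which also flushes addresses and tracks state in variables).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # rule issued by nvmftestinit
    ping -c 1 10.0.0.2                                             # initiator -> target namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target namespace -> initiator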
00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:35.019 06:35:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.019 [2024-11-20 06:35:06.764528] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:35.019 [2024-11-20 06:35:06.764644] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:35.019 [2024-11-20 06:35:06.837685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:35.278 [2024-11-20 06:35:06.898310] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:35.278 [2024-11-20 06:35:06.898356] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:35.278 [2024-11-20 06:35:06.898381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:35.278 [2024-11-20 06:35:06.898393] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:35.278 [2024-11-20 06:35:06.898403] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:35.278 [2024-11-20 06:35:06.899934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:35.278 [2024-11-20 06:35:06.899990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:35.278 [2024-11-20 06:35:06.900060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:35.278 [2024-11-20 06:35:06.900063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@866 -- # return 0 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.278 [2024-11-20 06:35:07.053065] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:35.278 06:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:35.278 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.536 Malloc1 
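The per-subsystem blocks emitted by the cat loop at shutdown.sh@29 are collected into rpcs.txt (note the rm at @27) and replayed in one go by the bare rpc_cmd at @36; the exact block contents live in target/shutdown.sh and are not echoed here. Judging by the Malloc1..Malloc10 bdevs and the 10.0.0.2:4420 listener notice just below, each block plausibly amounts to the following hedged sketch; the RPC method names are real SPDK RPCs, while the malloc sizes and serial numbers are illustrative assumptions.

  # equivalent bring-up against the already-running target (RPC socket /var/tmp/spdk.sock);
  # the transport flags are copied verbatim from the nvmf_create_transport call logged above
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in $(seq 1 10); do
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i          # 64 MiB / 512 B blocks (assumed sizes)
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done

Issuing the RPCs one by one like this is slower than the batched rpcs.txt replay the harness uses, but the end state should be the same: ten TCP subsystems, each backed by one malloc namespace, reachable on 10.0.0.2 port 4420.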
00:24:35.536 [2024-11-20 06:35:07.152246] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:35.536 Malloc2 00:24:35.536 Malloc3 00:24:35.536 Malloc4 00:24:35.536 Malloc5 00:24:35.795 Malloc6 00:24:35.795 Malloc7 00:24:35.795 Malloc8 00:24:35.795 Malloc9 00:24:35.795 Malloc10 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2141652 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:35.795 06:35:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:36.052 [2024-11-20 06:35:07.680370] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:41.322 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:41.322 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2141472 00:24:41.322 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2141472 ']' 00:24:41.322 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2141472 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # uname 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2141472 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2141472' 00:24:41.323 killing process with pid 2141472 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@971 -- # kill 2141472 00:24:41.323 06:35:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@976 -- # wait 2141472 00:24:41.323 [2024-11-20 06:35:12.672428] 
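From here shutdown_tc4 drives the actual failure path: spdk_nvme_perf is started in the background with the arguments shown above (up to 128 queued 44 KiB random writes for 20 seconds against 10.0.0.2:4420), it gets five seconds to ramp up, and then the nvmf target is killed while those writes are still in flight. The wall of "Write completed with error (sct=0, sc=8)", "starting I/O failed: -6" and "CQ transport error -6" lines that follows is that in-flight I/O being failed back as the TCP qpairs drop, interleaved with the target's own teardown errors, which is presumably exactly what this shutdown case is meant to exercise. A condensed sketch of the sequence (variable names follow the harness; whether perf must exit cleanly afterwards is up to target/shutdown.sh):

  # perf arguments copied verbatim from the log
  ./build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 \
    -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 &
  perfpid=$!
  sleep 5                               # let the workload reach steady state
  kill "$nvmfpid" && wait "$nvmfpid"    # tear the target down mid-I/O (nvmfpid set by nvmfappstart;
                                        # the harness does this via killprocess, as logged)
  wait "$perfpid"                       # observe how the initiator copes with the dead target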
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8c2e40 is same with the state(6) to be set 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 [2024-11-20 06:35:12.680778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed 
with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 [2024-11-20 06:35:12.681851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error 
(sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.323 starting I/O failed: -6 00:24:41.323 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 [2024-11-20 06:35:12.683171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O 
failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O 
failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 [2024-11-20 06:35:12.685040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:41.324 NVMe io qpair process completion error 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 [2024-11-20 06:35:12.685942] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960670 is same with the state(6) to be set 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 [2024-11-20 06:35:12.685984] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960670 is same with the 
state(6) to be set 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 [2024-11-20 06:35:12.685999] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960670 is same with the state(6) to be set 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 [2024-11-20 06:35:12.686012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960670 is same with the state(6) to be set 00:24:41.324 [2024-11-20 06:35:12.686024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960670 is same with the state(6) to be set 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 [2024-11-20 06:35:12.686036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x960670 is same with the state(6) to be set 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 [2024-11-20 06:35:12.686348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.324 starting I/O failed: -6 00:24:41.324 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write 
completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 [2024-11-20 06:35:12.687131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977d50 is same with tWrite completed with error (sct=0, sc=8) 00:24:41.325 he state(6) to be set 00:24:41.325 starting I/O failed: -6 00:24:41.325 [2024-11-20 06:35:12.687167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977d50 is same with tWrite completed with error (sct=0, sc=8) 00:24:41.325 he state(6) to be set 00:24:41.325 [2024-11-20 06:35:12.687184] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977d50 is same with the state(6) to be set 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 [2024-11-20 06:35:12.687196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977d50 is same with the state(6) to be set 00:24:41.325 [2024-11-20 06:35:12.687208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x977d50 is same with tWrite completed with error (sct=0, sc=8) 00:24:41.325 he state(6) to be set 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 [2024-11-20 06:35:12.687401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, 
sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 [2024-11-20 06:35:12.688782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 
00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.325 starting I/O failed: -6 00:24:41.325 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 
00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 [2024-11-20 06:35:12.690776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:41.326 NVMe io qpair process completion error 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 
00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 [2024-11-20 06:35:12.692005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 
00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 [2024-11-20 06:35:12.693080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 Write completed with error (sct=0, sc=8) 00:24:41.326 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed with error (sct=0, sc=8) 00:24:41.327 starting I/O failed: -6 00:24:41.327 Write completed 
with error (sct=0, sc=8)
00:24:41.327 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted; the distinct qpair-level errors from this burst follow ...]
00:24:41.327 [2024-11-20 06:35:12.694300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.327 [2024-11-20 06:35:12.696273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:41.327 NVMe io qpair process completion error
00:24:41.328 [2024-11-20 06:35:12.697588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:41.328 [2024-11-20 06:35:12.698800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:41.328 [2024-11-20 06:35:12.699958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.329 [2024-11-20 06:35:12.701944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:41.329 NVMe io qpair process completion error
00:24:41.329 [2024-11-20 06:35:12.703223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:41.329 [2024-11-20 06:35:12.704266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.330 [2024-11-20 06:35:12.705472] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:41.330 [2024-11-20 06:35:12.707246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:41.330 NVMe io qpair process completion error
00:24:41.331 [2024-11-20 06:35:12.708753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:41.331 [2024-11-20 06:35:12.709865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:41.331 [2024-11-20 06:35:12.711042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.332 [2024-11-20 06:35:12.714907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:41.332 NVMe io qpair process completion error
00:24:41.332 [2024-11-20 06:35:12.716198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:41.333 [2024-11-20 06:35:12.717284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:41.333 [2024-11-20 06:35:12.718462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:41.334 [2024-11-20 06:35:12.721817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:41.334 NVMe io qpair process completion error
[... further interleaved "Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" records omitted ...]
Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 
00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.334 starting I/O failed: -6 00:24:41.334 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 
00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed 
with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 [2024-11-20 06:35:12.727926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed 
with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 [2024-11-20 06:35:12.728924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.335 starting I/O failed: -6 00:24:41.335 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 
00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 [2024-11-20 06:35:12.730120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 
00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 
00:24:41.336 [2024-11-20 06:35:12.731842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.336 NVMe io qpair process completion error 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.336 starting I/O failed: -6 00:24:41.336 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 [2024-11-20 06:35:12.733136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:41.337 starting I/O failed: -6 00:24:41.337 starting I/O failed: -6 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting 
I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 [2024-11-20 06:35:12.734295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 
00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 [2024-11-20 06:35:12.735455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.337 starting I/O failed: -6 00:24:41.337 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 
00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 00:24:41.338 Write completed with error (sct=0, sc=8) 00:24:41.338 starting I/O failed: -6 
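The per-I/O failures in this run are all the same two-line "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" pattern; the useful signal is the handful of distinct qpair-level transport errors interleaved with them. A hypothetical way to pull that summary out of a saved copy of this console output (the file name nvmf_shutdown_tc4.log is made up for illustration) is:

  # hypothetical post-processing of a saved copy of this log; file name is illustrative
  grep -o '\[nqn[^]]*\] CQ transport error -6 ([^)]*) on qpair id [0-9]*' nvmf_shutdown_tc4.log \
      | sort | uniq -c | sort -rn

This reduces the repeated write-failure records to one count per (subsystem, qpair id) pair, which is usually all that is needed to see which controllers dropped which queues.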
00:24:41.338 Write completed with error (sct=0, sc=8)
00:24:41.338 starting I/O failed: -6
00:24:41.338 [2024-11-20 06:35:12.738247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:41.338 NVMe io qpair process completion error
00:24:41.338 Initializing NVMe Controllers
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:24:41.338 Controller IO queue size 128, less than required.
00:24:41.338 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:24:41.338 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:24:41.338 Initialization complete. Launching workers.
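The repeated "Controller IO queue size 128, less than required" notices indicate that the perf workload asked for a deeper queue than the 128-entry IO queues these subsystems advertise, so surplus requests are held back and queued inside the host NVMe driver, exactly as the message warns. A minimal re-run along the lines the message suggests might look like the sketch below; the option spellings (-q queue depth, -o I/O size in bytes, -w workload, -t seconds, -r transport ID) are assumed from spdk_nvme_perf's usual help text and may differ between SPDK versions, so treat this as illustrative rather than the exact command the test harness runs.

  # illustrative only: keep the queue depth at or below the reported IO queue size (128)
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w write -t 10 \
      -r 'trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'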
00:24:41.338 ========================================================
00:24:41.338 Latency(us)
00:24:41.338 Device Information : IOPS MiB/s Average min max
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 1787.00 76.79 71649.98 1017.37 125945.84
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 1861.27 79.98 68812.99 783.48 125486.10
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 1882.07 80.87 67244.78 1006.10 123829.60
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 1864.67 80.12 67899.53 860.03 120448.98
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 1837.50 78.96 68928.37 906.52 120741.04
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 1874.64 80.55 67590.29 863.47 115674.28
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 1781.69 76.56 71143.06 949.02 117593.93
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 1766.20 75.89 71794.54 873.26 127489.92
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 1756.87 75.49 72231.51 1015.16 131825.25
00:24:41.338 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1775.33 76.28 71512.60 821.05 117023.04
00:24:41.338 ========================================================
00:24:41.338 Total : 18187.24 781.48 69832.98 783.48 131825.25
00:24:41.338
00:24:41.338 [2024-11-20 06:35:12.744352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee95f0 is same with the state(6) to be set
00:24:41.338 [2024-11-20 06:35:12.744448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee92c0 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea900 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee8d10 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee89e0 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee86b0 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744740] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eeaae0 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9c50 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9920 is same with the state(6) to be set
00:24:41.339 [2024-11-20 06:35:12.744926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1eea720 is same with the state(6) to be set
00:24:41.339 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:24:41.597 06:35:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:24:42.532 06:35:14
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2141652 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@650 -- # local es=0 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2141652 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@638 -- # local arg=wait 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # type -t wait 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # wait 2141652 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@653 -- # es=1 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:42.532 rmmod nvme_tcp 00:24:42.532 rmmod nvme_fabrics 00:24:42.532 rmmod nvme_keyring 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@129 -- # return 0 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 2141472 ']' 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 2141472 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@952 -- # '[' -z 2141472 ']' 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@956 -- # kill -0 2141472 00:24:42.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2141472) - No such process 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@979 -- # echo 'Process with pid 2141472 is not found' 00:24:42.532 Process with pid 2141472 is not found 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:42.532 06:35:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:45.068 00:24:45.068 real 0m9.765s 00:24:45.068 user 0m23.961s 00:24:45.068 sys 0m5.620s 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:45.068 ************************************ 00:24:45.068 END TEST nvmf_shutdown_tc4 00:24:45.068 ************************************ 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:45.068 00:24:45.068 real 0m37.433s 00:24:45.068 user 1m41.090s 00:24:45.068 sys 0m12.133s 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
common/autotest_common.sh@10 -- # set +x 00:24:45.068 ************************************ 00:24:45.068 END TEST nvmf_shutdown 00:24:45.068 ************************************ 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:45.068 ************************************ 00:24:45.068 START TEST nvmf_nsid 00:24:45.068 ************************************ 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:45.068 * Looking for test storage... 00:24:45.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lcov --version 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.068 --rc genhtml_branch_coverage=1 00:24:45.068 --rc genhtml_function_coverage=1 00:24:45.068 --rc genhtml_legend=1 00:24:45.068 --rc geninfo_all_blocks=1 00:24:45.068 --rc geninfo_unexecuted_blocks=1 00:24:45.068 00:24:45.068 ' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.068 --rc genhtml_branch_coverage=1 00:24:45.068 --rc genhtml_function_coverage=1 00:24:45.068 --rc genhtml_legend=1 00:24:45.068 --rc geninfo_all_blocks=1 00:24:45.068 --rc geninfo_unexecuted_blocks=1 00:24:45.068 00:24:45.068 ' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.068 --rc genhtml_branch_coverage=1 00:24:45.068 --rc genhtml_function_coverage=1 00:24:45.068 --rc genhtml_legend=1 00:24:45.068 --rc geninfo_all_blocks=1 00:24:45.068 --rc geninfo_unexecuted_blocks=1 00:24:45.068 00:24:45.068 ' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:45.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:45.068 --rc genhtml_branch_coverage=1 00:24:45.068 --rc genhtml_function_coverage=1 00:24:45.068 --rc genhtml_legend=1 00:24:45.068 --rc geninfo_all_blocks=1 00:24:45.068 --rc geninfo_unexecuted_blocks=1 00:24:45.068 00:24:45.068 ' 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:45.068 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:45.069 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:24:45.069 06:35:16 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:46.969 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.969 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:24:46.969 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:46.969 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:46.969 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:46.969 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # mlx=() 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:46.970 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:46.970 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:46.970 Found net devices under 0000:09:00.0: cvl_0_0 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:46.970 Found net devices under 0000:09:00.1: cvl_0_1 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.970 06:35:18 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.970 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:46.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:24:46.971 00:24:46.971 --- 10.0.0.2 ping statistics --- 00:24:46.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.971 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:46.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:24:46.971 00:24:46.971 --- 10.0.0.1 ping statistics --- 00:24:46.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.971 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=2144386 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 2144386 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2144386 ']' 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:46.971 06:35:18 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.322 [2024-11-20 06:35:18.835890] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:24:47.322 [2024-11-20 06:35:18.835974] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:47.322 [2024-11-20 06:35:18.905788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.322 [2024-11-20 06:35:18.963582] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:47.322 [2024-11-20 06:35:18.963649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:47.322 [2024-11-20 06:35:18.963662] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:47.322 [2024-11-20 06:35:18.963673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:47.322 [2024-11-20 06:35:18.963682] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:47.322 [2024-11-20 06:35:18.964239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2144419 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 
10.0.0.1 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:47.322 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=77e77e0b-405d-4c44-9e59-13f853840c25 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3c5788ea-25c6-47ce-b4de-c03e67faedf4 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c828d7e4-65f3-48d0-8f25-aa0bc25842da 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:47.323 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.597 null0 00:24:47.597 null1 00:24:47.597 null2 00:24:47.597 [2024-11-20 06:35:19.151834] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:47.597 [2024-11-20 06:35:19.163960] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:24:47.597 [2024-11-20 06:35:19.164031] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2144419 ] 00:24:47.597 [2024-11-20 06:35:19.176037] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2144419 /var/tmp/tgt2.sock 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@833 -- # '[' -z 2144419 ']' 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:47.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:47.597 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:47.597 [2024-11-20 06:35:19.229755] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:47.598 [2024-11-20 06:35:19.287655] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:47.855 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:47.855 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@866 -- # return 0 00:24:47.855 06:35:19 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:48.421 [2024-11-20 06:35:19.990061] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:48.421 [2024-11-20 06:35:20.006489] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:48.421 nvme0n1 nvme0n2 00:24:48.421 nvme1n1 00:24:48.421 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:48.421 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:48.421 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # '[' 0 -lt 15 ']' 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # i=1 00:24:48.987 06:35:20 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # sleep 1 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n1 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n1 00:24:49.919 06:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 77e77e0b-405d-4c44-9e59-13f853840c25 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=77e77e0b405d4c449e5913f853840c25 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 77E77E0B405D4C449E5913F853840C25 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 77E77E0B405D4C449E5913F853840C25 == \7\7\E\7\7\E\0\B\4\0\5\D\4\C\4\4\9\E\5\9\1\3\F\8\5\3\8\4\0\C\2\5 ]] 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n2 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n2 00:24:49.919 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3c5788ea-25c6-47ce-b4de-c03e67faedf4 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3c5788ea25c647ceb4dec03e67faedf4 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3C5788EA25C647CEB4DEC03E67FAEDF4 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3C5788EA25C647CEB4DEC03E67FAEDF4 == \3\C\5\7\8\8\E\A\2\5\C\6\4\7\C\E\B\4\D\E\C\0\3\E\6\7\F\A\E\D\F\4 ]] 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1237 -- # local i=0 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # lsblk -l -o NAME 00:24:49.920 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1238 -- # grep -q -w nvme0n3 00:24:50.178 06:35:21 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # lsblk -l -o NAME 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1244 -- # grep -q -w nvme0n3 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1248 -- # return 0 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c828d7e4-65f3-48d0-8f25-aa0bc25842da 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c828d7e465f348d08f25aa0bc25842da 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C828D7E465F348D08F25AA0BC25842DA 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C828D7E465F348D08F25AA0BC25842DA == \C\8\2\8\D\7\E\4\6\5\F\3\4\8\D\0\8\F\2\5\A\A\0\B\C\2\5\8\4\2\D\A ]] 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2144419 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2144419 ']' 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2144419 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.178 06:35:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2144419 00:24:50.178 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:50.178 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:50.178 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2144419' 00:24:50.178 killing process with pid 2144419 00:24:50.178 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2144419 00:24:50.178 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2144419 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- 
# set +e 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:50.744 rmmod nvme_tcp 00:24:50.744 rmmod nvme_fabrics 00:24:50.744 rmmod nvme_keyring 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 2144386 ']' 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 2144386 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@952 -- # '[' -z 2144386 ']' 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@956 -- # kill -0 2144386 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # uname 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2144386 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2144386' 00:24:50.744 killing process with pid 2144386 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@971 -- # kill 2144386 00:24:50.744 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@976 -- # wait 2144386 00:24:51.002 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:51.002 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:51.002 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:51.002 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:51.003 06:35:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.537 06:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:53.537 00:24:53.537 real 0m8.401s 00:24:53.537 user 0m8.322s 
00:24:53.537 sys 0m2.633s 00:24:53.537 06:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:53.537 06:35:24 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:53.537 ************************************ 00:24:53.537 END TEST nvmf_nsid 00:24:53.537 ************************************ 00:24:53.537 06:35:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:53.537 00:24:53.537 real 11m42.660s 00:24:53.537 user 27m40.032s 00:24:53.537 sys 2m48.420s 00:24:53.537 06:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1128 -- # xtrace_disable 00:24:53.537 06:35:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:53.537 ************************************ 00:24:53.537 END TEST nvmf_target_extra 00:24:53.537 ************************************ 00:24:53.537 06:35:24 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:53.537 06:35:24 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:53.537 06:35:24 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:53.537 06:35:24 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:53.537 ************************************ 00:24:53.537 START TEST nvmf_host 00:24:53.537 ************************************ 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:53.537 * Looking for test storage... 00:24:53.537 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lcov --version 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.537 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:53.537 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.537 --rc genhtml_branch_coverage=1 00:24:53.537 --rc genhtml_function_coverage=1 00:24:53.537 --rc genhtml_legend=1 00:24:53.537 --rc geninfo_all_blocks=1 00:24:53.538 --rc geninfo_unexecuted_blocks=1 00:24:53.538 00:24:53.538 ' 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.538 --rc genhtml_branch_coverage=1 00:24:53.538 --rc genhtml_function_coverage=1 00:24:53.538 --rc genhtml_legend=1 00:24:53.538 --rc geninfo_all_blocks=1 00:24:53.538 --rc geninfo_unexecuted_blocks=1 00:24:53.538 00:24:53.538 ' 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.538 --rc genhtml_branch_coverage=1 00:24:53.538 --rc genhtml_function_coverage=1 00:24:53.538 --rc genhtml_legend=1 00:24:53.538 --rc geninfo_all_blocks=1 00:24:53.538 --rc geninfo_unexecuted_blocks=1 00:24:53.538 00:24:53.538 ' 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:53.538 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.538 --rc genhtml_branch_coverage=1 00:24:53.538 --rc genhtml_function_coverage=1 00:24:53.538 --rc genhtml_legend=1 00:24:53.538 --rc geninfo_all_blocks=1 00:24:53.538 --rc geninfo_unexecuted_blocks=1 00:24:53.538 00:24:53.538 ' 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.538 06:35:24 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.538 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:53.538 ************************************ 00:24:53.538 START TEST nvmf_multicontroller 00:24:53.538 ************************************ 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:53.538 * Looking for test storage... 
00:24:53.538 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lcov --version 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:24:53.538 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:24:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.539 --rc genhtml_branch_coverage=1 00:24:53.539 --rc genhtml_function_coverage=1 00:24:53.539 --rc genhtml_legend=1 00:24:53.539 --rc geninfo_all_blocks=1 00:24:53.539 --rc geninfo_unexecuted_blocks=1 00:24:53.539 00:24:53.539 ' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:24:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.539 --rc genhtml_branch_coverage=1 00:24:53.539 --rc genhtml_function_coverage=1 00:24:53.539 --rc genhtml_legend=1 00:24:53.539 --rc geninfo_all_blocks=1 00:24:53.539 --rc geninfo_unexecuted_blocks=1 00:24:53.539 00:24:53.539 ' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:24:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.539 --rc genhtml_branch_coverage=1 00:24:53.539 --rc genhtml_function_coverage=1 00:24:53.539 --rc genhtml_legend=1 00:24:53.539 --rc geninfo_all_blocks=1 00:24:53.539 --rc geninfo_unexecuted_blocks=1 00:24:53.539 00:24:53.539 ' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:24:53.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:53.539 --rc genhtml_branch_coverage=1 00:24:53.539 --rc genhtml_function_coverage=1 00:24:53.539 --rc genhtml_legend=1 00:24:53.539 --rc geninfo_all_blocks=1 00:24:53.539 --rc geninfo_unexecuted_blocks=1 00:24:53.539 00:24:53.539 ' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:53.539 06:35:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:53.539 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:53.539 06:35:25 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:24:53.539 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:24:53.540 06:35:25 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # local -ga e810 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:24:55.445 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:24:55.446 
06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:24:55.446 Found 0000:09:00.0 (0x8086 - 0x159b) 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.446 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:24:55.705 Found 0000:09:00.1 (0x8086 - 0x159b) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:55.705 06:35:27 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:24:55.705 Found net devices under 0000:09:00.0: cvl_0_0 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:24:55.705 Found net devices under 0000:09:00.1: cvl_0_1 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 
00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:55.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:55.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:24:55.705 00:24:55.705 --- 10.0.0.2 ping statistics --- 00:24:55.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.705 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:55.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:55.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:24:55.705 00:24:55.705 --- 10.0.0.1 ping statistics --- 00:24:55.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:55.705 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=2146899 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 2146899 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2146899 ']' 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:55.705 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:55.705 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:55.705 [2024-11-20 06:35:27.518044] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:24:55.705 [2024-11-20 06:35:27.518135] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:55.964 [2024-11-20 06:35:27.594700] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:55.964 [2024-11-20 06:35:27.657209] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:55.964 [2024-11-20 06:35:27.657258] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:55.964 [2024-11-20 06:35:27.657287] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:55.964 [2024-11-20 06:35:27.657300] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:55.964 [2024-11-20 06:35:27.657318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:55.964 [2024-11-20 06:35:27.658895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:55.964 [2024-11-20 06:35:27.658957] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:55.964 [2024-11-20 06:35:27.658961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:55.964 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:55.964 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:24:55.964 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:55.964 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:55.964 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 [2024-11-20 06:35:27.821422] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 Malloc0 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 [2024-11-20 06:35:27.885948] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 [2024-11-20 06:35:27.893835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 Malloc1 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2146997 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2146997 /var/tmp/bdevperf.sock 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # '[' -z 2146997 ']' 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # local max_retries=100 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:56.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # xtrace_disable 00:24:56.224 06:35:27 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.483 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:24:56.483 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@866 -- # return 0 00:24:56.483 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:56.483 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.483 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.483 NVMe0n1 00:24:56.483 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.741 1 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.741 request: 00:24:56.741 { 00:24:56.741 "name": "NVMe0", 00:24:56.741 "trtype": "tcp", 00:24:56.741 "traddr": "10.0.0.2", 00:24:56.741 "adrfam": "ipv4", 00:24:56.741 "trsvcid": "4420", 00:24:56.741 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:56.741 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:56.741 "hostaddr": "10.0.0.1", 00:24:56.741 "prchk_reftag": false, 00:24:56.741 "prchk_guard": false, 00:24:56.741 "hdgst": false, 00:24:56.741 "ddgst": false, 00:24:56.741 "allow_unrecognized_csi": false, 00:24:56.741 "method": "bdev_nvme_attach_controller", 00:24:56.741 "req_id": 1 00:24:56.741 } 00:24:56.741 Got JSON-RPC error response 00:24:56.741 response: 00:24:56.741 { 00:24:56.741 "code": -114, 00:24:56.741 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:56.741 } 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:56.741 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.742 request: 00:24:56.742 { 00:24:56.742 "name": "NVMe0", 00:24:56.742 "trtype": "tcp", 00:24:56.742 "traddr": "10.0.0.2", 00:24:56.742 "adrfam": "ipv4", 00:24:56.742 "trsvcid": "4420", 00:24:56.742 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:56.742 "hostaddr": "10.0.0.1", 00:24:56.742 "prchk_reftag": false, 00:24:56.742 "prchk_guard": false, 00:24:56.742 "hdgst": false, 00:24:56.742 "ddgst": false, 00:24:56.742 "allow_unrecognized_csi": false, 00:24:56.742 "method": "bdev_nvme_attach_controller", 00:24:56.742 "req_id": 1 00:24:56.742 } 00:24:56.742 Got JSON-RPC error response 00:24:56.742 response: 00:24:56.742 { 00:24:56.742 "code": -114, 00:24:56.742 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:56.742 } 00:24:56.742 06:35:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.742 request: 00:24:56.742 { 00:24:56.742 "name": "NVMe0", 00:24:56.742 "trtype": "tcp", 00:24:56.742 "traddr": "10.0.0.2", 00:24:56.742 "adrfam": "ipv4", 00:24:56.742 "trsvcid": "4420", 00:24:56.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.742 "hostaddr": "10.0.0.1", 00:24:56.742 "prchk_reftag": false, 00:24:56.742 "prchk_guard": false, 00:24:56.742 "hdgst": false, 00:24:56.742 "ddgst": false, 00:24:56.742 "multipath": "disable", 00:24:56.742 "allow_unrecognized_csi": false, 00:24:56.742 "method": "bdev_nvme_attach_controller", 00:24:56.742 "req_id": 1 00:24:56.742 } 00:24:56.742 Got JSON-RPC error response 00:24:56.742 response: 00:24:56.742 { 00:24:56.742 "code": -114, 00:24:56.742 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:56.742 } 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:56.742 06:35:28 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:56.742 request: 00:24:56.742 { 00:24:56.742 "name": "NVMe0", 00:24:56.742 "trtype": "tcp", 00:24:56.742 "traddr": "10.0.0.2", 00:24:56.742 "adrfam": "ipv4", 00:24:56.742 "trsvcid": "4420", 00:24:56.742 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:56.742 "hostaddr": "10.0.0.1", 00:24:56.742 "prchk_reftag": false, 00:24:56.742 "prchk_guard": false, 00:24:56.742 "hdgst": false, 00:24:56.742 "ddgst": false, 00:24:56.742 "multipath": "failover", 00:24:56.742 "allow_unrecognized_csi": false, 00:24:56.742 "method": "bdev_nvme_attach_controller", 00:24:56.742 "req_id": 1 00:24:56.742 } 00:24:56.742 Got JSON-RPC error response 00:24:56.742 response: 00:24:56.742 { 00:24:56.742 "code": -114, 00:24:56.742 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:56.742 } 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:56.742 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.743 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 NVMe0n1 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
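Note: the three -114 responses above are the expected negative-path results for this part of the multicontroller test — bdevperf already holds a controller named NVMe0 on 10.0.0.2:4420/cnode1, so re-attaching under the same name (to the same subsystem, to cnode2, or with multipath set to disable or failover against the same path) is rejected, while the attach to the second listener on port 4421 that follows succeeds and adds a second path backing NVMe0n1. A condensed sketch of the same sequence driven directly with scripts/rpc.py against the bdevperf RPC socket (commands and arguments taken from the log; running them outside the harness's rpc_cmd/NOT wrappers is an assumption):

    RPC="./scripts/rpc.py -s /var/tmp/bdevperf.sock"

    # Duplicate attach against the existing path: expected to fail with -114.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover || echo "rejected as expected"

    # Second path on listener 4421: accepted, NVMe0 now has two paths.
    $RPC bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1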
00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:57.000 06:35:28 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:58.373 { 00:24:58.373 "results": [ 00:24:58.373 { 00:24:58.373 "job": "NVMe0n1", 00:24:58.373 "core_mask": "0x1", 00:24:58.373 "workload": "write", 00:24:58.373 "status": "finished", 00:24:58.373 "queue_depth": 128, 00:24:58.373 "io_size": 4096, 00:24:58.373 "runtime": 1.007453, 00:24:58.373 "iops": 17146.20930207166, 00:24:58.373 "mibps": 66.97738008621742, 00:24:58.373 "io_failed": 0, 00:24:58.373 "io_timeout": 0, 00:24:58.373 "avg_latency_us": 7452.832529813592, 00:24:58.373 "min_latency_us": 6505.054814814815, 00:24:58.373 "max_latency_us": 14757.736296296296 00:24:58.373 } 00:24:58.373 ], 00:24:58.373 "core_count": 1 00:24:58.373 } 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 2146997 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@952 -- # '[' -z 2146997 ']' 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2146997 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2146997 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2146997' 00:24:58.373 killing process with pid 2146997 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2146997 00:24:58.373 06:35:29 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2146997 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:58.373 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1596 -- # sort -u 00:24:58.631 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # cat 00:24:58.631 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:58.631 [2024-11-20 06:35:28.002587] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:24:58.631 [2024-11-20 06:35:28.002686] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2146997 ] 00:24:58.631 [2024-11-20 06:35:28.070703] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:58.631 [2024-11-20 06:35:28.130264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:58.631 [2024-11-20 06:35:28.727139] bdev.c:4756:bdev_name_add: *ERROR*: Bdev name d82bb354-1a3c-4ee4-a730-3040417b6646 already exists 00:24:58.631 [2024-11-20 06:35:28.727180] bdev.c:7965:bdev_register: *ERROR*: Unable to add uuid:d82bb354-1a3c-4ee4-a730-3040417b6646 alias for bdev NVMe1n1 00:24:58.631 [2024-11-20 06:35:28.727210] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:58.631 Running I/O for 1 seconds... 00:24:58.631 17146.00 IOPS, 66.98 MiB/s 00:24:58.631 Latency(us) 00:24:58.631 [2024-11-20T05:35:30.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.631 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:58.631 NVMe0n1 : 1.01 17146.21 66.98 0.00 0.00 7452.83 6505.05 14757.74 00:24:58.631 [2024-11-20T05:35:30.467Z] =================================================================================================================== 00:24:58.631 [2024-11-20T05:35:30.467Z] Total : 17146.21 66.98 0.00 0.00 7452.83 6505.05 14757.74 00:24:58.631 Received shutdown signal, test time was about 1.000000 seconds 00:24:58.631 00:24:58.631 Latency(us) 00:24:58.631 [2024-11-20T05:35:30.467Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:58.631 [2024-11-20T05:35:30.467Z] =================================================================================================================== 00:24:58.631 [2024-11-20T05:35:30.467Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:58.631 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:58.631 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1603 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:58.631 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1597 -- # read -r file 00:24:58.631 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:58.631 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:58.632 rmmod nvme_tcp 00:24:58.632 rmmod nvme_fabrics 00:24:58.632 rmmod nvme_keyring 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:24:58.632 
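Note: the bdevperf summary dumped from try.txt above is internally consistent — with 4 KiB I/Os, 17146.21 IOPS × 4096 B ≈ 70.2 MB/s ≈ 66.98 MiB/s, matching the MiB/s column, and with a queue depth of 128 the expected average latency is roughly 128 / 17146 ≈ 7.47 ms, in line with the reported ~7.45 ms. If the perform_tests JSON printed earlier is captured to a file (results.json here is a hypothetical path), the headline numbers can be pulled out with jq:

    jq '.results[0] | {iops, mibps, avg_latency_us}' results.json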
06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 2146899 ']' 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 2146899 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # '[' -z 2146899 ']' 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # kill -0 2146899 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # uname 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2146899 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2146899' 00:24:58.632 killing process with pid 2146899 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@971 -- # kill 2146899 00:24:58.632 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@976 -- # wait 2146899 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:58.890 06:35:30 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:01.422 00:25:01.422 real 0m7.600s 00:25:01.422 user 0m11.831s 00:25:01.422 sys 0m2.423s 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:25:01.422 ************************************ 00:25:01.422 END TEST nvmf_multicontroller 00:25:01.422 ************************************ 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh 
--transport=tcp 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.422 ************************************ 00:25:01.422 START TEST nvmf_aer 00:25:01.422 ************************************ 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:25:01.422 * Looking for test storage... 00:25:01.422 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lcov --version 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:01.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.422 --rc genhtml_branch_coverage=1 00:25:01.422 --rc genhtml_function_coverage=1 00:25:01.422 --rc genhtml_legend=1 00:25:01.422 --rc geninfo_all_blocks=1 00:25:01.422 --rc geninfo_unexecuted_blocks=1 00:25:01.422 00:25:01.422 ' 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:01.422 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.422 --rc genhtml_branch_coverage=1 00:25:01.422 --rc genhtml_function_coverage=1 00:25:01.422 --rc genhtml_legend=1 00:25:01.422 --rc geninfo_all_blocks=1 00:25:01.422 --rc geninfo_unexecuted_blocks=1 00:25:01.422 00:25:01.422 ' 00:25:01.422 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:01.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.423 --rc genhtml_branch_coverage=1 00:25:01.423 --rc genhtml_function_coverage=1 00:25:01.423 --rc genhtml_legend=1 00:25:01.423 --rc geninfo_all_blocks=1 00:25:01.423 --rc geninfo_unexecuted_blocks=1 00:25:01.423 00:25:01.423 ' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:01.423 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:01.423 --rc genhtml_branch_coverage=1 00:25:01.423 --rc genhtml_function_coverage=1 00:25:01.423 --rc genhtml_legend=1 00:25:01.423 --rc geninfo_all_blocks=1 00:25:01.423 --rc geninfo_unexecuted_blocks=1 00:25:01.423 00:25:01.423 ' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:01.423 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:25:01.423 06:35:32 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # e810=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # 
pci_devs=("${e810[@]}") 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:03.325 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:03.325 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:03.326 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:03.326 Found net devices under 0000:09:00.0: cvl_0_0 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:03.326 06:35:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:03.326 Found net devices under 0000:09:00.1: cvl_0_1 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:03.326 06:35:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:03.326 
06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:03.326 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:03.326 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:25:03.326 00:25:03.326 --- 10.0.0.2 ping statistics --- 00:25:03.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.326 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:03.326 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:03.326 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:25:03.326 00:25:03.326 --- 10.0.0.1 ping statistics --- 00:25:03.326 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:03.326 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=2149218 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 2149218 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # '[' -z 2149218 ']' 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:03.326 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:03.326 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.585 [2024-11-20 06:35:35.203024] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
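Note: the aer test reuses the harness's physical-NIC loopback topology — the two e810 ports found above are split across network namespaces so the host can reach itself over real NICs: cvl_0_0 is moved into cvl_0_0_ns_spdk and addressed as 10.0.0.2 (target side), cvl_0_1 stays in the root namespace as 10.0.0.1 (initiator side), an iptables rule opens TCP/4420 on the initiator interface, and the two pings confirm connectivity before nvmf_tgt is started inside the namespace. A condensed sketch of that setup (interface names and addresses taken from the log; run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side (root netns)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator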
00:25:03.585 [2024-11-20 06:35:35.203120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:03.585 [2024-11-20 06:35:35.275762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:03.585 [2024-11-20 06:35:35.330755] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:03.585 [2024-11-20 06:35:35.330810] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:03.585 [2024-11-20 06:35:35.330837] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:03.585 [2024-11-20 06:35:35.330848] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:03.585 [2024-11-20 06:35:35.330857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:03.585 [2024-11-20 06:35:35.332519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:03.585 [2024-11-20 06:35:35.332573] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.585 [2024-11-20 06:35:35.332634] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:03.585 [2024-11-20 06:35:35.332638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@866 -- # return 0 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 [2024-11-20 06:35:35.479939] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 Malloc0 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 
0 ]] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 [2024-11-20 06:35:35.558227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:03.844 [ 00:25:03.844 { 00:25:03.844 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:03.844 "subtype": "Discovery", 00:25:03.844 "listen_addresses": [], 00:25:03.844 "allow_any_host": true, 00:25:03.844 "hosts": [] 00:25:03.844 }, 00:25:03.844 { 00:25:03.844 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:03.844 "subtype": "NVMe", 00:25:03.844 "listen_addresses": [ 00:25:03.844 { 00:25:03.844 "trtype": "TCP", 00:25:03.844 "adrfam": "IPv4", 00:25:03.844 "traddr": "10.0.0.2", 00:25:03.844 "trsvcid": "4420" 00:25:03.844 } 00:25:03.844 ], 00:25:03.844 "allow_any_host": true, 00:25:03.844 "hosts": [], 00:25:03.844 "serial_number": "SPDK00000000000001", 00:25:03.844 "model_number": "SPDK bdev Controller", 00:25:03.844 "max_namespaces": 2, 00:25:03.844 "min_cntlid": 1, 00:25:03.844 "max_cntlid": 65519, 00:25:03.844 "namespaces": [ 00:25:03.844 { 00:25:03.844 "nsid": 1, 00:25:03.844 "bdev_name": "Malloc0", 00:25:03.844 "name": "Malloc0", 00:25:03.844 "nguid": "F357A28A244A48A6B6E2122F4D7E898F", 00:25:03.844 "uuid": "f357a28a-244a-48a6-b6e2-122f4d7e898f" 00:25:03.844 } 00:25:03.844 ] 00:25:03.844 } 00:25:03.844 ] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2149365 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # local i=0 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 0 -lt 200 ']' 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=1 00:25:03.844 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 1 -lt 200 ']' 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=2 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # '[' 2 -lt 200 ']' 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # i=3 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # sleep 0.1 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1274 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1278 -- # return 0 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:04.102 Malloc1 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.102 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:04.360 [ 00:25:04.360 { 00:25:04.360 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:04.360 "subtype": "Discovery", 00:25:04.360 "listen_addresses": [], 00:25:04.360 "allow_any_host": true, 00:25:04.360 "hosts": [] 00:25:04.360 }, 00:25:04.360 { 00:25:04.360 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:04.360 "subtype": "NVMe", 00:25:04.360 "listen_addresses": [ 00:25:04.360 { 00:25:04.360 "trtype": "TCP", 00:25:04.360 "adrfam": "IPv4", 00:25:04.360 "traddr": "10.0.0.2", 00:25:04.360 "trsvcid": "4420" 00:25:04.360 } 00:25:04.360 ], 00:25:04.360 "allow_any_host": true, 
00:25:04.360 "hosts": [], 00:25:04.360 "serial_number": "SPDK00000000000001", 00:25:04.360 "model_number": "SPDK bdev Controller", 00:25:04.360 "max_namespaces": 2, 00:25:04.360 "min_cntlid": 1, 00:25:04.360 "max_cntlid": 65519, 00:25:04.360 "namespaces": [ 00:25:04.360 { 00:25:04.360 "nsid": 1, 00:25:04.360 "bdev_name": "Malloc0", 00:25:04.360 "name": "Malloc0", 00:25:04.360 "nguid": "F357A28A244A48A6B6E2122F4D7E898F", 00:25:04.360 "uuid": "f357a28a-244a-48a6-b6e2-122f4d7e898f" 00:25:04.360 }, 00:25:04.360 { 00:25:04.360 "nsid": 2, 00:25:04.360 "bdev_name": "Malloc1", 00:25:04.360 "name": "Malloc1", 00:25:04.360 "nguid": "F1F9D3ED2AB04A8C81D687DCC49CE84C", 00:25:04.360 "uuid": "f1f9d3ed-2ab0-4a8c-81d6-87dcc49ce84c" 00:25:04.360 } 00:25:04.360 ] 00:25:04.360 } 00:25:04.360 ] 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2149365 00:25:04.360 Asynchronous Event Request test 00:25:04.360 Attaching to 10.0.0.2 00:25:04.360 Attached to 10.0.0.2 00:25:04.360 Registering asynchronous event callbacks... 00:25:04.360 Starting namespace attribute notice tests for all controllers... 00:25:04.360 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:25:04.360 aer_cb - Changed Namespace 00:25:04.360 Cleaning up... 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.360 06:35:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:04.360 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:04.360 rmmod nvme_tcp 00:25:04.360 rmmod nvme_fabrics 00:25:04.360 rmmod nvme_keyring 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 2149218 ']' 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 2149218 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # '[' -z 2149218 ']' 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # kill -0 2149218 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # uname 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2149218 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2149218' 00:25:04.361 killing process with pid 2149218 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@971 -- # kill 2149218 00:25:04.361 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@976 -- # wait 2149218 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:04.619 06:35:36 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:07.149 00:25:07.149 real 0m5.694s 00:25:07.149 user 0m4.729s 00:25:07.149 sys 0m2.066s 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:25:07.149 ************************************ 00:25:07.149 END TEST nvmf_aer 00:25:07.149 ************************************ 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:07.149 06:35:38 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.149 ************************************ 00:25:07.149 START TEST nvmf_async_init 00:25:07.149 ************************************ 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:25:07.149 * Looking for test storage... 00:25:07.149 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lcov --version 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.149 --rc genhtml_branch_coverage=1 00:25:07.149 --rc genhtml_function_coverage=1 00:25:07.149 --rc genhtml_legend=1 00:25:07.149 --rc geninfo_all_blocks=1 00:25:07.149 --rc geninfo_unexecuted_blocks=1 00:25:07.149 00:25:07.149 ' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.149 --rc genhtml_branch_coverage=1 00:25:07.149 --rc genhtml_function_coverage=1 00:25:07.149 --rc genhtml_legend=1 00:25:07.149 --rc geninfo_all_blocks=1 00:25:07.149 --rc geninfo_unexecuted_blocks=1 00:25:07.149 00:25:07.149 ' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.149 --rc genhtml_branch_coverage=1 00:25:07.149 --rc genhtml_function_coverage=1 00:25:07.149 --rc genhtml_legend=1 00:25:07.149 --rc geninfo_all_blocks=1 00:25:07.149 --rc geninfo_unexecuted_blocks=1 00:25:07.149 00:25:07.149 ' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:07.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.149 --rc genhtml_branch_coverage=1 00:25:07.149 --rc genhtml_function_coverage=1 00:25:07.149 --rc genhtml_legend=1 00:25:07.149 --rc geninfo_all_blocks=1 00:25:07.149 --rc geninfo_unexecuted_blocks=1 00:25:07.149 00:25:07.149 ' 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.149 06:35:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.149 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:07.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:25:07.150 06:35:38 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=529d4a7a7c2648a7af1ccc7a13f4f868 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:25:07.150 06:35:38 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.051 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:09.051 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:25:09.051 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # local -ga x722 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:09.052 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:09.052 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:09.052 Found net devices under 0000:09:00.0: cvl_0_0 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:09.052 Found net devices under 0000:09:00.1: cvl_0_1 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:09.052 06:35:40 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:09.052 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:09.310 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:09.310 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:09.310 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:09.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:09.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:25:09.311 00:25:09.311 --- 10.0.0.2 ping statistics --- 00:25:09.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.311 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:09.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:09.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:09.311 00:25:09.311 --- 10.0.0.1 ping statistics --- 00:25:09.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:09.311 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=2151317 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 2151317 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # '[' -z 2151317 ']' 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:09.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:09.311 06:35:40 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.311 [2024-11-20 06:35:40.994364] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:25:09.311 [2024-11-20 06:35:40.994443] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:09.311 [2024-11-20 06:35:41.065263] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.311 [2024-11-20 06:35:41.121494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:09.311 [2024-11-20 06:35:41.121547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:09.311 [2024-11-20 06:35:41.121577] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:09.311 [2024-11-20 06:35:41.121589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:09.311 [2024-11-20 06:35:41.121599] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:09.311 [2024-11-20 06:35:41.122194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@866 -- # return 0 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.569 [2024-11-20 06:35:41.264528] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.569 null0 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 529d4a7a7c2648a7af1ccc7a13f4f868 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.569 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.570 [2024-11-20 06:35:41.304799] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.570 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.828 nvme0n1 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.828 [ 00:25:09.828 { 00:25:09.828 "name": "nvme0n1", 00:25:09.828 "aliases": [ 00:25:09.828 "529d4a7a-7c26-48a7-af1c-cc7a13f4f868" 00:25:09.828 ], 00:25:09.828 "product_name": "NVMe disk", 00:25:09.828 "block_size": 512, 00:25:09.828 "num_blocks": 2097152, 00:25:09.828 "uuid": "529d4a7a-7c26-48a7-af1c-cc7a13f4f868", 00:25:09.828 "numa_id": 0, 00:25:09.828 "assigned_rate_limits": { 00:25:09.828 "rw_ios_per_sec": 0, 00:25:09.828 "rw_mbytes_per_sec": 0, 00:25:09.828 "r_mbytes_per_sec": 0, 00:25:09.828 "w_mbytes_per_sec": 0 00:25:09.828 }, 00:25:09.828 "claimed": false, 00:25:09.828 "zoned": false, 00:25:09.828 "supported_io_types": { 00:25:09.828 "read": true, 00:25:09.828 "write": true, 00:25:09.828 "unmap": false, 00:25:09.828 "flush": true, 00:25:09.828 "reset": true, 00:25:09.828 "nvme_admin": true, 00:25:09.828 "nvme_io": true, 00:25:09.828 "nvme_io_md": false, 00:25:09.828 "write_zeroes": true, 00:25:09.828 "zcopy": false, 00:25:09.828 "get_zone_info": false, 00:25:09.828 "zone_management": false, 00:25:09.828 "zone_append": false, 00:25:09.828 "compare": true, 00:25:09.828 "compare_and_write": true, 00:25:09.828 "abort": true, 00:25:09.828 "seek_hole": false, 00:25:09.828 "seek_data": false, 00:25:09.828 "copy": true, 00:25:09.828 "nvme_iov_md": false 00:25:09.828 }, 00:25:09.828 
"memory_domains": [ 00:25:09.828 { 00:25:09.828 "dma_device_id": "system", 00:25:09.828 "dma_device_type": 1 00:25:09.828 } 00:25:09.828 ], 00:25:09.828 "driver_specific": { 00:25:09.828 "nvme": [ 00:25:09.828 { 00:25:09.828 "trid": { 00:25:09.828 "trtype": "TCP", 00:25:09.828 "adrfam": "IPv4", 00:25:09.828 "traddr": "10.0.0.2", 00:25:09.828 "trsvcid": "4420", 00:25:09.828 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:09.828 }, 00:25:09.828 "ctrlr_data": { 00:25:09.828 "cntlid": 1, 00:25:09.828 "vendor_id": "0x8086", 00:25:09.828 "model_number": "SPDK bdev Controller", 00:25:09.828 "serial_number": "00000000000000000000", 00:25:09.828 "firmware_revision": "25.01", 00:25:09.828 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:09.828 "oacs": { 00:25:09.828 "security": 0, 00:25:09.828 "format": 0, 00:25:09.828 "firmware": 0, 00:25:09.828 "ns_manage": 0 00:25:09.828 }, 00:25:09.828 "multi_ctrlr": true, 00:25:09.828 "ana_reporting": false 00:25:09.828 }, 00:25:09.828 "vs": { 00:25:09.828 "nvme_version": "1.3" 00:25:09.828 }, 00:25:09.828 "ns_data": { 00:25:09.828 "id": 1, 00:25:09.828 "can_share": true 00:25:09.828 } 00:25:09.828 } 00:25:09.828 ], 00:25:09.828 "mp_policy": "active_passive" 00:25:09.828 } 00:25:09.828 } 00:25:09.828 ] 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:09.828 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:09.828 [2024-11-20 06:35:41.553344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:09.828 [2024-11-20 06:35:41.553446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe90b20 (9): Bad file descriptor 00:25:10.086 [2024-11-20 06:35:41.685425] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:25:10.086 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.086 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:10.086 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.086 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.086 [ 00:25:10.086 { 00:25:10.086 "name": "nvme0n1", 00:25:10.086 "aliases": [ 00:25:10.086 "529d4a7a-7c26-48a7-af1c-cc7a13f4f868" 00:25:10.086 ], 00:25:10.086 "product_name": "NVMe disk", 00:25:10.086 "block_size": 512, 00:25:10.086 "num_blocks": 2097152, 00:25:10.086 "uuid": "529d4a7a-7c26-48a7-af1c-cc7a13f4f868", 00:25:10.086 "numa_id": 0, 00:25:10.086 "assigned_rate_limits": { 00:25:10.086 "rw_ios_per_sec": 0, 00:25:10.086 "rw_mbytes_per_sec": 0, 00:25:10.086 "r_mbytes_per_sec": 0, 00:25:10.086 "w_mbytes_per_sec": 0 00:25:10.086 }, 00:25:10.086 "claimed": false, 00:25:10.086 "zoned": false, 00:25:10.086 "supported_io_types": { 00:25:10.086 "read": true, 00:25:10.086 "write": true, 00:25:10.086 "unmap": false, 00:25:10.086 "flush": true, 00:25:10.086 "reset": true, 00:25:10.086 "nvme_admin": true, 00:25:10.086 "nvme_io": true, 00:25:10.086 "nvme_io_md": false, 00:25:10.086 "write_zeroes": true, 00:25:10.086 "zcopy": false, 00:25:10.086 "get_zone_info": false, 00:25:10.086 "zone_management": false, 00:25:10.086 "zone_append": false, 00:25:10.086 "compare": true, 00:25:10.086 "compare_and_write": true, 00:25:10.086 "abort": true, 00:25:10.086 "seek_hole": false, 00:25:10.086 "seek_data": false, 00:25:10.086 "copy": true, 00:25:10.086 "nvme_iov_md": false 00:25:10.086 }, 00:25:10.086 "memory_domains": [ 00:25:10.086 { 00:25:10.086 "dma_device_id": "system", 00:25:10.086 "dma_device_type": 1 00:25:10.086 } 00:25:10.086 ], 00:25:10.086 "driver_specific": { 00:25:10.086 "nvme": [ 00:25:10.086 { 00:25:10.086 "trid": { 00:25:10.086 "trtype": "TCP", 00:25:10.086 "adrfam": "IPv4", 00:25:10.086 "traddr": "10.0.0.2", 00:25:10.086 "trsvcid": "4420", 00:25:10.086 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:10.086 }, 00:25:10.086 "ctrlr_data": { 00:25:10.086 "cntlid": 2, 00:25:10.086 "vendor_id": "0x8086", 00:25:10.086 "model_number": "SPDK bdev Controller", 00:25:10.086 "serial_number": "00000000000000000000", 00:25:10.086 "firmware_revision": "25.01", 00:25:10.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.086 "oacs": { 00:25:10.086 "security": 0, 00:25:10.086 "format": 0, 00:25:10.086 "firmware": 0, 00:25:10.086 "ns_manage": 0 00:25:10.086 }, 00:25:10.086 "multi_ctrlr": true, 00:25:10.086 "ana_reporting": false 00:25:10.086 }, 00:25:10.086 "vs": { 00:25:10.086 "nvme_version": "1.3" 00:25:10.086 }, 00:25:10.086 "ns_data": { 00:25:10.086 "id": 1, 00:25:10.086 "can_share": true 00:25:10.086 } 00:25:10.086 } 00:25:10.086 ], 00:25:10.086 "mp_policy": "active_passive" 00:25:10.086 } 00:25:10.086 } 00:25:10.086 ] 00:25:10.086 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
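Editor's note: for readers following the async_init flow, the setup, attach, and reset steps traced above reduce to the RPC sequence below (a sketch under the same rpc.py assumption; the nguid, addresses, and ports are the values from this run). The point of the reset is that the bdev survives it: the trace shows the same uuid before and after, with ctrlr_data.cntlid moving from 1 to 2.

  ./scripts/rpc.py nvmf_create_transport -t tcp -o
  ./scripts/rpc.py bdev_null_create null0 1024 512                  # size 1024, block size 512, as traced
  ./scripts/rpc.py bdev_wait_for_examine
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 529d4a7a7c2648a7af1ccc7a13f4f868
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1                        # cntlid 1, uuid 529d4a7a-...
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0                 # trace shows the transient "Bad file descriptor" during reconnect
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1                        # same uuid, cntlid now 2
  ./scripts/rpc.py bdev_nvme_detach_controller nvme0                # first pass done; detached before the TLS re-attach below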
00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.UHRGmHsxwA 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.UHRGmHsxwA 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.UHRGmHsxwA 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 [2024-11-20 06:35:41.737910] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:10.087 [2024-11-20 06:35:41.738035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 [2024-11-20 06:35:41.753951] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.087 nvme0n1 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 
00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 [ 00:25:10.087 { 00:25:10.087 "name": "nvme0n1", 00:25:10.087 "aliases": [ 00:25:10.087 "529d4a7a-7c26-48a7-af1c-cc7a13f4f868" 00:25:10.087 ], 00:25:10.087 "product_name": "NVMe disk", 00:25:10.087 "block_size": 512, 00:25:10.087 "num_blocks": 2097152, 00:25:10.087 "uuid": "529d4a7a-7c26-48a7-af1c-cc7a13f4f868", 00:25:10.087 "numa_id": 0, 00:25:10.087 "assigned_rate_limits": { 00:25:10.087 "rw_ios_per_sec": 0, 00:25:10.087 "rw_mbytes_per_sec": 0, 00:25:10.087 "r_mbytes_per_sec": 0, 00:25:10.087 "w_mbytes_per_sec": 0 00:25:10.087 }, 00:25:10.087 "claimed": false, 00:25:10.087 "zoned": false, 00:25:10.087 "supported_io_types": { 00:25:10.087 "read": true, 00:25:10.087 "write": true, 00:25:10.087 "unmap": false, 00:25:10.087 "flush": true, 00:25:10.087 "reset": true, 00:25:10.087 "nvme_admin": true, 00:25:10.087 "nvme_io": true, 00:25:10.087 "nvme_io_md": false, 00:25:10.087 "write_zeroes": true, 00:25:10.087 "zcopy": false, 00:25:10.087 "get_zone_info": false, 00:25:10.087 "zone_management": false, 00:25:10.087 "zone_append": false, 00:25:10.087 "compare": true, 00:25:10.087 "compare_and_write": true, 00:25:10.087 "abort": true, 00:25:10.087 "seek_hole": false, 00:25:10.087 "seek_data": false, 00:25:10.087 "copy": true, 00:25:10.087 "nvme_iov_md": false 00:25:10.087 }, 00:25:10.087 "memory_domains": [ 00:25:10.087 { 00:25:10.087 "dma_device_id": "system", 00:25:10.087 "dma_device_type": 1 00:25:10.087 } 00:25:10.087 ], 00:25:10.087 "driver_specific": { 00:25:10.087 "nvme": [ 00:25:10.087 { 00:25:10.087 "trid": { 00:25:10.087 "trtype": "TCP", 00:25:10.087 "adrfam": "IPv4", 00:25:10.087 "traddr": "10.0.0.2", 00:25:10.087 "trsvcid": "4421", 00:25:10.087 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:10.087 }, 00:25:10.087 "ctrlr_data": { 00:25:10.087 "cntlid": 3, 00:25:10.087 "vendor_id": "0x8086", 00:25:10.087 "model_number": "SPDK bdev Controller", 00:25:10.087 "serial_number": "00000000000000000000", 00:25:10.087 "firmware_revision": "25.01", 00:25:10.087 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:10.087 "oacs": { 00:25:10.087 "security": 0, 00:25:10.087 "format": 0, 00:25:10.087 "firmware": 0, 00:25:10.087 "ns_manage": 0 00:25:10.087 }, 00:25:10.087 "multi_ctrlr": true, 00:25:10.087 "ana_reporting": false 00:25:10.087 }, 00:25:10.087 "vs": { 00:25:10.087 "nvme_version": "1.3" 00:25:10.087 }, 00:25:10.087 "ns_data": { 00:25:10.087 "id": 1, 00:25:10.087 "can_share": true 00:25:10.087 } 00:25:10.087 } 00:25:10.087 ], 00:25:10.087 "mp_policy": "active_passive" 00:25:10.087 } 00:25:10.087 } 00:25:10.087 ] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.UHRGmHsxwA 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 
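Editor's note: the second attach above exercises the TLS path — a PSK file is registered as key0, any-host access is disabled, a --secure-channel listener is added on port 4421, and the controller is re-attached with the PSK. A sketch of that sequence (same rpc.py assumption; the key path is a mktemp name from this run and is deleted at the end of the test):

  chmod 0600 /tmp/tmp.UHRGmHsxwA                # PSK written by the test, then locked down
  ./scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.UHRGmHsxwA
  ./scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
  ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
      -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0
  ./scripts/rpc.py bdev_get_bdevs -b nvme0n1    # trsvcid is now 4421, cntlid 3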
00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:10.087 rmmod nvme_tcp 00:25:10.087 rmmod nvme_fabrics 00:25:10.087 rmmod nvme_keyring 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 2151317 ']' 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 2151317 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # '[' -z 2151317 ']' 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # kill -0 2151317 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # uname 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:10.087 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2151317 00:25:10.347 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:10.347 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:10.347 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2151317' 00:25:10.347 killing process with pid 2151317 00:25:10.347 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@971 -- # kill 2151317 00:25:10.347 06:35:41 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@976 -- # wait 2151317 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 
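Editor's note: what follows is the standard nvmftestfini teardown. Roughly, and assuming _remove_spdk_ns deletes the network namespace created during setup (the trace does not echo its body), the manual equivalent would be:

  # Run-specific pid and interface names; sketch only.
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics
  kill 2151317                                            # the nvmf_tgt started for this test
  iptables-save | grep -v SPDK_NVMF | iptables-restore    # drop the rule added for the test ports
  ip netns del cvl_0_0_ns_spdk                            # assumed: what _remove_spdk_ns does
  ip -4 addr flush cvl_0_1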
00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:10.347 06:35:42 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:12.889 00:25:12.889 real 0m5.769s 00:25:12.889 user 0m2.187s 00:25:12.889 sys 0m2.000s 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:12.889 ************************************ 00:25:12.889 END TEST nvmf_async_init 00:25:12.889 ************************************ 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.889 ************************************ 00:25:12.889 START TEST dma 00:25:12.889 ************************************ 00:25:12.889 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:12.889 * Looking for test storage... 00:25:12.889 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lcov --version 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:12.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.890 --rc genhtml_branch_coverage=1 00:25:12.890 --rc genhtml_function_coverage=1 00:25:12.890 --rc genhtml_legend=1 00:25:12.890 --rc geninfo_all_blocks=1 00:25:12.890 --rc geninfo_unexecuted_blocks=1 00:25:12.890 00:25:12.890 ' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:12.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.890 --rc genhtml_branch_coverage=1 00:25:12.890 --rc genhtml_function_coverage=1 00:25:12.890 --rc genhtml_legend=1 00:25:12.890 --rc geninfo_all_blocks=1 00:25:12.890 --rc geninfo_unexecuted_blocks=1 00:25:12.890 00:25:12.890 ' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:12.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.890 --rc genhtml_branch_coverage=1 00:25:12.890 --rc genhtml_function_coverage=1 00:25:12.890 --rc genhtml_legend=1 00:25:12.890 --rc geninfo_all_blocks=1 00:25:12.890 --rc geninfo_unexecuted_blocks=1 00:25:12.890 00:25:12.890 ' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:12.890 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.890 --rc genhtml_branch_coverage=1 00:25:12.890 --rc genhtml_function_coverage=1 00:25:12.890 --rc genhtml_legend=1 00:25:12.890 --rc geninfo_all_blocks=1 00:25:12.890 --rc geninfo_unexecuted_blocks=1 00:25:12.890 00:25:12.890 ' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.890 
06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.890 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.891 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:12.891 00:25:12.891 real 0m0.171s 00:25:12.891 user 0m0.120s 00:25:12.891 sys 0m0.060s 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:12.891 ************************************ 00:25:12.891 END TEST dma 00:25:12.891 ************************************ 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.891 ************************************ 00:25:12.891 START TEST nvmf_identify 00:25:12.891 
************************************ 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:12.891 * Looking for test storage... 00:25:12.891 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lcov --version 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.891 --rc genhtml_branch_coverage=1 00:25:12.891 --rc genhtml_function_coverage=1 00:25:12.891 --rc genhtml_legend=1 00:25:12.891 --rc geninfo_all_blocks=1 00:25:12.891 --rc geninfo_unexecuted_blocks=1 00:25:12.891 00:25:12.891 ' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.891 --rc genhtml_branch_coverage=1 00:25:12.891 --rc genhtml_function_coverage=1 00:25:12.891 --rc genhtml_legend=1 00:25:12.891 --rc geninfo_all_blocks=1 00:25:12.891 --rc geninfo_unexecuted_blocks=1 00:25:12.891 00:25:12.891 ' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.891 --rc genhtml_branch_coverage=1 00:25:12.891 --rc genhtml_function_coverage=1 00:25:12.891 --rc genhtml_legend=1 00:25:12.891 --rc geninfo_all_blocks=1 00:25:12.891 --rc geninfo_unexecuted_blocks=1 00:25:12.891 00:25:12.891 ' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:12.891 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:12.891 --rc genhtml_branch_coverage=1 00:25:12.891 --rc genhtml_function_coverage=1 00:25:12.891 --rc genhtml_legend=1 00:25:12.891 --rc geninfo_all_blocks=1 00:25:12.891 --rc geninfo_unexecuted_blocks=1 00:25:12.891 00:25:12.891 ' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:12.891 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:12.892 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:25:12.892 06:35:44 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:15.422 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:15.423 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:15.423 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
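The discovery trace here walks the two detected E810 ports (0000:09:00.0 and 0000:09:00.1, both 0x8086:0x159b bound to the ice driver) and resolves each PCI address to its kernel network interface by globbing sysfs, which is how the cvl_0_0 and cvl_0_1 names reported in the next entries are obtained. A minimal sketch of that lookup, using the PCI addresses from this run:

  # sketch of the PCI-to-netdev resolution traced here (PCI addresses are from this run)
  for pci in 0000:09:00.0 0000:09:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
          [ -e "$path" ] && echo "Found net devices under $pci: ${path##*/}"
      done
  done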
00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:15.423 Found net devices under 0000:09:00.0: cvl_0_0 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:15.423 Found net devices under 0000:09:00.1: cvl_0_1 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:15.423 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:15.423 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.326 ms 00:25:15.423 00:25:15.423 --- 10.0.0.2 ping statistics --- 00:25:15.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.423 rtt min/avg/max/mdev = 0.326/0.326/0.326/0.000 ms 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:15.423 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:15.423 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:25:15.423 00:25:15.423 --- 10.0.0.1 ping statistics --- 00:25:15.423 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:15.423 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2153459 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2153459 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # '[' -z 2153459 ']' 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:15.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:15.423 06:35:46 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.423 [2024-11-20 06:35:46.946254] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
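At this point identify.sh launches the target application inside the target namespace and waits for it to start listening on its RPC socket before issuing any commands; the startup banner interleaved here is printed by nvmf_tgt itself. A minimal sketch of that launch step, with the namespace, flags and socket path taken from this run:

  # sketch of the nvmf_tgt launch traced here (namespace, flags and socket path are from this run)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!
  # the harness's waitforlisten then polls /var/tmp/spdk.sock until the app answers RPCs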
00:25:15.423 [2024-11-20 06:35:46.946343] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:15.423 [2024-11-20 06:35:47.017411] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:15.423 [2024-11-20 06:35:47.076062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:15.423 [2024-11-20 06:35:47.076129] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:15.423 [2024-11-20 06:35:47.076143] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:15.423 [2024-11-20 06:35:47.076153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:15.423 [2024-11-20 06:35:47.076162] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:15.423 [2024-11-20 06:35:47.077918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:15.424 [2024-11-20 06:35:47.077982] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:15.424 [2024-11-20 06:35:47.078089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:15.424 [2024-11-20 06:35:47.078097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@866 -- # return 0 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.424 [2024-11-20 06:35:47.202961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.424 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.682 Malloc0 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.682 [2024-11-20 06:35:47.290609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:15.682 [ 00:25:15.682 { 00:25:15.682 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:15.682 "subtype": "Discovery", 00:25:15.682 "listen_addresses": [ 00:25:15.682 { 00:25:15.682 "trtype": "TCP", 00:25:15.682 "adrfam": "IPv4", 00:25:15.682 "traddr": "10.0.0.2", 00:25:15.682 "trsvcid": "4420" 00:25:15.682 } 00:25:15.682 ], 00:25:15.682 "allow_any_host": true, 00:25:15.682 "hosts": [] 00:25:15.682 }, 00:25:15.682 { 00:25:15.682 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:15.682 "subtype": "NVMe", 00:25:15.682 "listen_addresses": [ 00:25:15.682 { 00:25:15.682 "trtype": "TCP", 00:25:15.682 "adrfam": "IPv4", 00:25:15.682 "traddr": "10.0.0.2", 00:25:15.682 "trsvcid": "4420" 00:25:15.682 } 00:25:15.682 ], 00:25:15.682 "allow_any_host": true, 00:25:15.682 "hosts": [], 00:25:15.682 "serial_number": "SPDK00000000000001", 00:25:15.682 "model_number": "SPDK bdev Controller", 00:25:15.682 "max_namespaces": 32, 00:25:15.682 "min_cntlid": 1, 00:25:15.682 "max_cntlid": 65519, 00:25:15.682 "namespaces": [ 00:25:15.682 { 00:25:15.682 "nsid": 1, 00:25:15.682 "bdev_name": "Malloc0", 00:25:15.682 "name": "Malloc0", 00:25:15.682 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:15.682 "eui64": "ABCDEF0123456789", 00:25:15.682 "uuid": "a22f35c4-8eca-4319-86cc-717f56ab0c74" 00:25:15.682 } 00:25:15.682 ] 00:25:15.682 } 00:25:15.682 ] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:15.682 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:15.682 [2024-11-20 06:35:47.332976] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:25:15.682 [2024-11-20 06:35:47.333019] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153597 ] 00:25:15.682 [2024-11-20 06:35:47.382527] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:15.682 [2024-11-20 06:35:47.382597] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:15.682 [2024-11-20 06:35:47.382624] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:15.682 [2024-11-20 06:35:47.382644] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:15.682 [2024-11-20 06:35:47.382661] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:15.682 [2024-11-20 06:35:47.386760] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:15.682 [2024-11-20 06:35:47.386823] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x64f690 0 00:25:15.682 [2024-11-20 06:35:47.394337] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:15.682 [2024-11-20 06:35:47.394368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:15.682 [2024-11-20 06:35:47.394377] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:15.682 [2024-11-20 06:35:47.394384] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:15.682 [2024-11-20 06:35:47.394429] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.682 [2024-11-20 06:35:47.394444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.682 [2024-11-20 06:35:47.394452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.682 [2024-11-20 06:35:47.394469] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:15.682 [2024-11-20 06:35:47.394496] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.682 [2024-11-20 06:35:47.402331] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.682 [2024-11-20 06:35:47.402350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.402357] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402368] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.402388] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:15.683 [2024-11-20 06:35:47.402400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:15.683 [2024-11-20 06:35:47.402410] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:15.683 [2024-11-20 06:35:47.402431] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.402457] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.402485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.683 [2024-11-20 06:35:47.402601] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.402614] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.402621] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402628] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.402637] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:15.683 [2024-11-20 06:35:47.402650] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:15.683 [2024-11-20 06:35:47.402663] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402670] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402676] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.402687] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.402709] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.683 [2024-11-20 06:35:47.402800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.402813] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.402820] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402826] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.402835] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:15.683 [2024-11-20 06:35:47.402849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:15.683 [2024-11-20 06:35:47.402861] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402869] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.402875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.402887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.402908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 
00:25:15.683 [2024-11-20 06:35:47.402987] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.403001] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.403008] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.403023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:15.683 [2024-11-20 06:35:47.403040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403049] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403056] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.403066] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.403087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.683 [2024-11-20 06:35:47.403161] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.403173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.403186] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403194] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.403203] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:15.683 [2024-11-20 06:35:47.403213] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:15.683 [2024-11-20 06:35:47.403227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:15.683 [2024-11-20 06:35:47.403338] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:15.683 [2024-11-20 06:35:47.403349] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:15.683 [2024-11-20 06:35:47.403363] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403371] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403377] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.403388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.403410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.683 [2024-11-20 06:35:47.403500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.403513] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.403521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403527] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.403536] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:15.683 [2024-11-20 06:35:47.403553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403562] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403569] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.403579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.403600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.683 [2024-11-20 06:35:47.403678] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.403691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.403698] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403705] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.403712] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:15.683 [2024-11-20 06:35:47.403722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:15.683 [2024-11-20 06:35:47.403735] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:15.683 [2024-11-20 06:35:47.403754] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:15.683 [2024-11-20 06:35:47.403770] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403783] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.683 [2024-11-20 06:35:47.403794] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.683 [2024-11-20 06:35:47.403817] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.683 [2024-11-20 06:35:47.403940] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.683 [2024-11-20 06:35:47.403953] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.683 [2024-11-20 06:35:47.403960] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.403978] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x64f690): datao=0, datal=4096, cccid=0 00:25:15.683 [2024-11-20 06:35:47.403986] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x6b1100) on tqpair(0x64f690): expected_datao=0, payload_size=4096 00:25:15.683 [2024-11-20 06:35:47.403993] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.404004] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.404012] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.404024] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.683 [2024-11-20 06:35:47.404034] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.683 [2024-11-20 06:35:47.404040] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.683 [2024-11-20 06:35:47.404047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.683 [2024-11-20 06:35:47.404059] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:15.683 [2024-11-20 06:35:47.404068] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:15.683 [2024-11-20 06:35:47.404075] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:15.684 [2024-11-20 06:35:47.404088] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:15.684 [2024-11-20 06:35:47.404098] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:15.684 [2024-11-20 06:35:47.404106] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:15.684 [2024-11-20 06:35:47.404124] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:15.684 [2024-11-20 06:35:47.404137] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404145] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.404162] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.684 [2024-11-20 06:35:47.404183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.684 [2024-11-20 06:35:47.404273] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.684 [2024-11-20 06:35:47.404285] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.684 [2024-11-20 06:35:47.404292] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.684 [2024-11-20 06:35:47.404320] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404335] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 
06:35:47.404359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.684 [2024-11-20 06:35:47.404370] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404377] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.404393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.684 [2024-11-20 06:35:47.404403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404416] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.404424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.684 [2024-11-20 06:35:47.404434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404441] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404447] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.404456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.684 [2024-11-20 06:35:47.404465] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:15.684 [2024-11-20 06:35:47.404480] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:15.684 [2024-11-20 06:35:47.404491] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404499] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.404509] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.684 [2024-11-20 06:35:47.404531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1100, cid 0, qid 0 00:25:15.684 [2024-11-20 06:35:47.404543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1280, cid 1, qid 0 00:25:15.684 [2024-11-20 06:35:47.404551] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1400, cid 2, qid 0 00:25:15.684 [2024-11-20 06:35:47.404559] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.684 [2024-11-20 06:35:47.404566] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1700, cid 4, qid 0 00:25:15.684 [2024-11-20 06:35:47.404687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.684 [2024-11-20 06:35:47.404701] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.684 [2024-11-20 06:35:47.404708] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.684 
[2024-11-20 06:35:47.404715] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1700) on tqpair=0x64f690 00:25:15.684 [2024-11-20 06:35:47.404728] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:15.684 [2024-11-20 06:35:47.404738] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:15.684 [2024-11-20 06:35:47.404756] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404765] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.404776] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.684 [2024-11-20 06:35:47.404801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1700, cid 4, qid 0 00:25:15.684 [2024-11-20 06:35:47.404930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.684 [2024-11-20 06:35:47.404945] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.684 [2024-11-20 06:35:47.404952] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404958] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x64f690): datao=0, datal=4096, cccid=4 00:25:15.684 [2024-11-20 06:35:47.404965] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b1700) on tqpair(0x64f690): expected_datao=0, payload_size=4096 00:25:15.684 [2024-11-20 06:35:47.404973] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404989] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.404998] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446314] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.684 [2024-11-20 06:35:47.446333] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.684 [2024-11-20 06:35:47.446340] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446347] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1700) on tqpair=0x64f690 00:25:15.684 [2024-11-20 06:35:47.446368] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:15.684 [2024-11-20 06:35:47.446403] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446414] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.446425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.684 [2024-11-20 06:35:47.446437] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446444] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446450] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x64f690) 00:25:15.684 [2024-11-20 06:35:47.446459] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP 
ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.684 [2024-11-20 06:35:47.446487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1700, cid 4, qid 0 00:25:15.684 [2024-11-20 06:35:47.446516] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1880, cid 5, qid 0 00:25:15.684 [2024-11-20 06:35:47.446664] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.684 [2024-11-20 06:35:47.446678] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.684 [2024-11-20 06:35:47.446685] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446692] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x64f690): datao=0, datal=1024, cccid=4 00:25:15.684 [2024-11-20 06:35:47.446699] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b1700) on tqpair(0x64f690): expected_datao=0, payload_size=1024 00:25:15.684 [2024-11-20 06:35:47.446707] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446716] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446724] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.684 [2024-11-20 06:35:47.446741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.684 [2024-11-20 06:35:47.446748] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.446755] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1880) on tqpair=0x64f690 00:25:15.684 [2024-11-20 06:35:47.488319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.684 [2024-11-20 06:35:47.488342] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.684 [2024-11-20 06:35:47.488350] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.488357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1700) on tqpair=0x64f690 00:25:15.684 [2024-11-20 06:35:47.488375] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.684 [2024-11-20 06:35:47.488384] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x64f690) 00:25:15.685 [2024-11-20 06:35:47.488395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.685 [2024-11-20 06:35:47.488426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1700, cid 4, qid 0 00:25:15.685 [2024-11-20 06:35:47.488552] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.685 [2024-11-20 06:35:47.488564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.685 [2024-11-20 06:35:47.488571] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488578] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x64f690): datao=0, datal=3072, cccid=4 00:25:15.685 [2024-11-20 06:35:47.488585] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b1700) on tqpair(0x64f690): expected_datao=0, payload_size=3072 00:25:15.685 [2024-11-20 06:35:47.488593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:25:15.685 [2024-11-20 06:35:47.488603] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488610] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488622] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.685 [2024-11-20 06:35:47.488632] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.685 [2024-11-20 06:35:47.488638] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488645] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1700) on tqpair=0x64f690 00:25:15.685 [2024-11-20 06:35:47.488660] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488669] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x64f690) 00:25:15.685 [2024-11-20 06:35:47.488679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.685 [2024-11-20 06:35:47.488707] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1700, cid 4, qid 0 00:25:15.685 [2024-11-20 06:35:47.488800] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.685 [2024-11-20 06:35:47.488812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.685 [2024-11-20 06:35:47.488819] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488825] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x64f690): datao=0, datal=8, cccid=4 00:25:15.685 [2024-11-20 06:35:47.488833] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x6b1700) on tqpair(0x64f690): expected_datao=0, payload_size=8 00:25:15.685 [2024-11-20 06:35:47.488840] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488849] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.685 [2024-11-20 06:35:47.488857] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.946 [2024-11-20 06:35:47.530379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.946 [2024-11-20 06:35:47.530397] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.946 [2024-11-20 06:35:47.530405] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.946 [2024-11-20 06:35:47.530412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1700) on tqpair=0x64f690 00:25:15.946 ===================================================== 00:25:15.946 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:15.946 ===================================================== 00:25:15.946 Controller Capabilities/Features 00:25:15.946 ================================ 00:25:15.946 Vendor ID: 0000 00:25:15.946 Subsystem Vendor ID: 0000 00:25:15.946 Serial Number: .................... 00:25:15.946 Model Number: ........................................ 
00:25:15.946 Firmware Version: 25.01 00:25:15.946 Recommended Arb Burst: 0 00:25:15.946 IEEE OUI Identifier: 00 00 00 00:25:15.946 Multi-path I/O 00:25:15.946 May have multiple subsystem ports: No 00:25:15.946 May have multiple controllers: No 00:25:15.946 Associated with SR-IOV VF: No 00:25:15.946 Max Data Transfer Size: 131072 00:25:15.946 Max Number of Namespaces: 0 00:25:15.946 Max Number of I/O Queues: 1024 00:25:15.946 NVMe Specification Version (VS): 1.3 00:25:15.946 NVMe Specification Version (Identify): 1.3 00:25:15.946 Maximum Queue Entries: 128 00:25:15.946 Contiguous Queues Required: Yes 00:25:15.946 Arbitration Mechanisms Supported 00:25:15.946 Weighted Round Robin: Not Supported 00:25:15.946 Vendor Specific: Not Supported 00:25:15.946 Reset Timeout: 15000 ms 00:25:15.946 Doorbell Stride: 4 bytes 00:25:15.946 NVM Subsystem Reset: Not Supported 00:25:15.946 Command Sets Supported 00:25:15.946 NVM Command Set: Supported 00:25:15.946 Boot Partition: Not Supported 00:25:15.946 Memory Page Size Minimum: 4096 bytes 00:25:15.946 Memory Page Size Maximum: 4096 bytes 00:25:15.946 Persistent Memory Region: Not Supported 00:25:15.946 Optional Asynchronous Events Supported 00:25:15.946 Namespace Attribute Notices: Not Supported 00:25:15.946 Firmware Activation Notices: Not Supported 00:25:15.946 ANA Change Notices: Not Supported 00:25:15.946 PLE Aggregate Log Change Notices: Not Supported 00:25:15.946 LBA Status Info Alert Notices: Not Supported 00:25:15.946 EGE Aggregate Log Change Notices: Not Supported 00:25:15.946 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.946 Zone Descriptor Change Notices: Not Supported 00:25:15.946 Discovery Log Change Notices: Supported 00:25:15.946 Controller Attributes 00:25:15.946 128-bit Host Identifier: Not Supported 00:25:15.946 Non-Operational Permissive Mode: Not Supported 00:25:15.946 NVM Sets: Not Supported 00:25:15.946 Read Recovery Levels: Not Supported 00:25:15.946 Endurance Groups: Not Supported 00:25:15.946 Predictable Latency Mode: Not Supported 00:25:15.946 Traffic Based Keep ALive: Not Supported 00:25:15.946 Namespace Granularity: Not Supported 00:25:15.946 SQ Associations: Not Supported 00:25:15.946 UUID List: Not Supported 00:25:15.946 Multi-Domain Subsystem: Not Supported 00:25:15.946 Fixed Capacity Management: Not Supported 00:25:15.946 Variable Capacity Management: Not Supported 00:25:15.946 Delete Endurance Group: Not Supported 00:25:15.946 Delete NVM Set: Not Supported 00:25:15.946 Extended LBA Formats Supported: Not Supported 00:25:15.946 Flexible Data Placement Supported: Not Supported 00:25:15.946 00:25:15.946 Controller Memory Buffer Support 00:25:15.946 ================================ 00:25:15.946 Supported: No 00:25:15.946 00:25:15.946 Persistent Memory Region Support 00:25:15.946 ================================ 00:25:15.946 Supported: No 00:25:15.946 00:25:15.946 Admin Command Set Attributes 00:25:15.946 ============================ 00:25:15.946 Security Send/Receive: Not Supported 00:25:15.946 Format NVM: Not Supported 00:25:15.946 Firmware Activate/Download: Not Supported 00:25:15.946 Namespace Management: Not Supported 00:25:15.946 Device Self-Test: Not Supported 00:25:15.946 Directives: Not Supported 00:25:15.946 NVMe-MI: Not Supported 00:25:15.946 Virtualization Management: Not Supported 00:25:15.946 Doorbell Buffer Config: Not Supported 00:25:15.946 Get LBA Status Capability: Not Supported 00:25:15.946 Command & Feature Lockdown Capability: Not Supported 00:25:15.946 Abort Command Limit: 1 00:25:15.946 Async 
Event Request Limit: 4 00:25:15.946 Number of Firmware Slots: N/A 00:25:15.946 Firmware Slot 1 Read-Only: N/A 00:25:15.946 Firmware Activation Without Reset: N/A 00:25:15.946 Multiple Update Detection Support: N/A 00:25:15.947 Firmware Update Granularity: No Information Provided 00:25:15.947 Per-Namespace SMART Log: No 00:25:15.947 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.947 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:15.947 Command Effects Log Page: Not Supported 00:25:15.947 Get Log Page Extended Data: Supported 00:25:15.947 Telemetry Log Pages: Not Supported 00:25:15.947 Persistent Event Log Pages: Not Supported 00:25:15.947 Supported Log Pages Log Page: May Support 00:25:15.947 Commands Supported & Effects Log Page: Not Supported 00:25:15.947 Feature Identifiers & Effects Log Page:May Support 00:25:15.947 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.947 Data Area 4 for Telemetry Log: Not Supported 00:25:15.947 Error Log Page Entries Supported: 128 00:25:15.947 Keep Alive: Not Supported 00:25:15.947 00:25:15.947 NVM Command Set Attributes 00:25:15.947 ========================== 00:25:15.947 Submission Queue Entry Size 00:25:15.947 Max: 1 00:25:15.947 Min: 1 00:25:15.947 Completion Queue Entry Size 00:25:15.947 Max: 1 00:25:15.947 Min: 1 00:25:15.947 Number of Namespaces: 0 00:25:15.947 Compare Command: Not Supported 00:25:15.947 Write Uncorrectable Command: Not Supported 00:25:15.947 Dataset Management Command: Not Supported 00:25:15.947 Write Zeroes Command: Not Supported 00:25:15.947 Set Features Save Field: Not Supported 00:25:15.947 Reservations: Not Supported 00:25:15.947 Timestamp: Not Supported 00:25:15.947 Copy: Not Supported 00:25:15.947 Volatile Write Cache: Not Present 00:25:15.947 Atomic Write Unit (Normal): 1 00:25:15.947 Atomic Write Unit (PFail): 1 00:25:15.947 Atomic Compare & Write Unit: 1 00:25:15.947 Fused Compare & Write: Supported 00:25:15.947 Scatter-Gather List 00:25:15.947 SGL Command Set: Supported 00:25:15.947 SGL Keyed: Supported 00:25:15.947 SGL Bit Bucket Descriptor: Not Supported 00:25:15.947 SGL Metadata Pointer: Not Supported 00:25:15.947 Oversized SGL: Not Supported 00:25:15.947 SGL Metadata Address: Not Supported 00:25:15.947 SGL Offset: Supported 00:25:15.947 Transport SGL Data Block: Not Supported 00:25:15.947 Replay Protected Memory Block: Not Supported 00:25:15.947 00:25:15.947 Firmware Slot Information 00:25:15.947 ========================= 00:25:15.947 Active slot: 0 00:25:15.947 00:25:15.947 00:25:15.947 Error Log 00:25:15.947 ========= 00:25:15.947 00:25:15.947 Active Namespaces 00:25:15.947 ================= 00:25:15.947 Discovery Log Page 00:25:15.947 ================== 00:25:15.947 Generation Counter: 2 00:25:15.947 Number of Records: 2 00:25:15.947 Record Format: 0 00:25:15.947 00:25:15.947 Discovery Log Entry 0 00:25:15.947 ---------------------- 00:25:15.947 Transport Type: 3 (TCP) 00:25:15.947 Address Family: 1 (IPv4) 00:25:15.947 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:15.947 Entry Flags: 00:25:15.947 Duplicate Returned Information: 1 00:25:15.947 Explicit Persistent Connection Support for Discovery: 1 00:25:15.947 Transport Requirements: 00:25:15.947 Secure Channel: Not Required 00:25:15.947 Port ID: 0 (0x0000) 00:25:15.947 Controller ID: 65535 (0xffff) 00:25:15.947 Admin Max SQ Size: 128 00:25:15.947 Transport Service Identifier: 4420 00:25:15.947 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:15.947 Transport Address: 10.0.0.2 00:25:15.947 
Discovery Log Entry 1 00:25:15.947 ---------------------- 00:25:15.947 Transport Type: 3 (TCP) 00:25:15.947 Address Family: 1 (IPv4) 00:25:15.947 Subsystem Type: 2 (NVM Subsystem) 00:25:15.947 Entry Flags: 00:25:15.947 Duplicate Returned Information: 0 00:25:15.947 Explicit Persistent Connection Support for Discovery: 0 00:25:15.947 Transport Requirements: 00:25:15.947 Secure Channel: Not Required 00:25:15.947 Port ID: 0 (0x0000) 00:25:15.947 Controller ID: 65535 (0xffff) 00:25:15.947 Admin Max SQ Size: 128 00:25:15.947 Transport Service Identifier: 4420 00:25:15.947 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:15.947 Transport Address: 10.0.0.2 [2024-11-20 06:35:47.530524] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:15.947 [2024-11-20 06:35:47.530547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1100) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.530562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.947 [2024-11-20 06:35:47.530572] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1280) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.530580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.947 [2024-11-20 06:35:47.530588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1400) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.530596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.947 [2024-11-20 06:35:47.530604] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.530612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.947 [2024-11-20 06:35:47.530629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.530639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.530646] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.947 [2024-11-20 06:35:47.530657] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.947 [2024-11-20 06:35:47.530696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.947 [2024-11-20 06:35:47.530852] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.947 [2024-11-20 06:35:47.530867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.947 [2024-11-20 06:35:47.530874] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.530881] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.530894] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.530901] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.530908] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.947 [2024-11-20 06:35:47.530919] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.947 [2024-11-20 06:35:47.530946] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.947 [2024-11-20 06:35:47.531039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.947 [2024-11-20 06:35:47.531054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.947 [2024-11-20 06:35:47.531061] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.531076] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:15.947 [2024-11-20 06:35:47.531084] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:15.947 [2024-11-20 06:35:47.531100] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531109] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531116] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.947 [2024-11-20 06:35:47.531127] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.947 [2024-11-20 06:35:47.531147] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.947 [2024-11-20 06:35:47.531225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.947 [2024-11-20 06:35:47.531238] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.947 [2024-11-20 06:35:47.531250] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531258] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.531275] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531291] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.947 [2024-11-20 06:35:47.531311] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.947 [2024-11-20 06:35:47.531335] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.947 [2024-11-20 06:35:47.531412] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.947 [2024-11-20 06:35:47.531424] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.947 [2024-11-20 06:35:47.531431] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.947 [2024-11-20 06:35:47.531453] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.947 [2024-11-20 06:35:47.531469] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.947 [2024-11-20 06:35:47.531480] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.531500] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.948 [2024-11-20 06:35:47.531572] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.531585] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.531592] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.948 [2024-11-20 06:35:47.531615] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531624] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531631] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.948 [2024-11-20 06:35:47.531641] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.531662] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.948 [2024-11-20 06:35:47.531735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.531747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.531755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531761] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.948 [2024-11-20 06:35:47.531777] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531786] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.948 [2024-11-20 06:35:47.531803] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.531823] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.948 [2024-11-20 06:35:47.531899] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.531911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.531918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.948 [2024-11-20 06:35:47.531945] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531954] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.531961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.948 [2024-11-20 06:35:47.531972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.531992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.948 [2024-11-20 06:35:47.532061] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.532074] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.532081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.532087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.948 [2024-11-20 06:35:47.532103] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.532112] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.532119] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.948 [2024-11-20 06:35:47.532129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.532149] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.948 [2024-11-20 06:35:47.532225] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.532237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.532244] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.532251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.948 [2024-11-20 06:35:47.532266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.532274] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.532281] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x64f690) 00:25:15.948 [2024-11-20 06:35:47.532292] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.536336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x6b1580, cid 3, qid 0 00:25:15.948 [2024-11-20 06:35:47.536442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.536457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.536464] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.536471] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x6b1580) on tqpair=0x64f690 00:25:15.948 [2024-11-20 06:35:47.536485] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:25:15.948 00:25:15.948 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:15.948 [2024-11-20 06:35:47.573622] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:25:15.948 [2024-11-20 06:35:47.573670] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153609 ] 00:25:15.948 [2024-11-20 06:35:47.624753] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:15.948 [2024-11-20 06:35:47.624810] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:15.948 [2024-11-20 06:35:47.624821] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:15.948 [2024-11-20 06:35:47.624837] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:15.948 [2024-11-20 06:35:47.624851] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:15.948 [2024-11-20 06:35:47.628583] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:15.948 [2024-11-20 06:35:47.628635] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xc8e690 0 00:25:15.948 [2024-11-20 06:35:47.635346] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:15.948 [2024-11-20 06:35:47.635368] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:15.948 [2024-11-20 06:35:47.635377] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:15.948 [2024-11-20 06:35:47.635384] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:15.948 [2024-11-20 06:35:47.635419] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.635432] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.635439] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.948 [2024-11-20 06:35:47.635454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:15.948 [2024-11-20 06:35:47.635482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.948 [2024-11-20 06:35:47.643316] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.643335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.643342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.948 [2024-11-20 06:35:47.643364] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:15.948 [2024-11-20 06:35:47.643375] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:15.948 [2024-11-20 06:35:47.643384] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:15.948 [2024-11-20 06:35:47.643403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643412] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643419] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.948 [2024-11-20 06:35:47.643430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.643455] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.948 [2024-11-20 06:35:47.643579] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.643594] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.643601] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643608] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.948 [2024-11-20 06:35:47.643616] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:15.948 [2024-11-20 06:35:47.643630] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:15.948 [2024-11-20 06:35:47.643642] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643654] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643661] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.948 [2024-11-20 06:35:47.643672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.948 [2024-11-20 06:35:47.643694] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.948 [2024-11-20 06:35:47.643775] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.948 [2024-11-20 06:35:47.643789] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.948 [2024-11-20 06:35:47.643795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.948 [2024-11-20 06:35:47.643802] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.948 [2024-11-20 06:35:47.643811] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout) 00:25:15.948 [2024-11-20 06:35:47.643825] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:15.948 [2024-11-20 06:35:47.643837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.643844] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.643851] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.643861] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.949 [2024-11-20 06:35:47.643883] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.949 [2024-11-20 06:35:47.643964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.949 [2024-11-20 06:35:47.643978] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.949 [2024-11-20 06:35:47.643985] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.643992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.949 [2024-11-20 06:35:47.644000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:15.949 [2024-11-20 06:35:47.644017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644033] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.644043] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.949 [2024-11-20 06:35:47.644064] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.949 [2024-11-20 06:35:47.644140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.949 [2024-11-20 06:35:47.644155] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.949 [2024-11-20 06:35:47.644161] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.949 [2024-11-20 06:35:47.644175] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:15.949 [2024-11-20 06:35:47.644184] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:15.949 [2024-11-20 06:35:47.644197] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:15.949 [2024-11-20 06:35:47.644310] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:15.949 [2024-11-20 06:35:47.644325] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:15.949 [2024-11-20 06:35:47.644339] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644347] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644353] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.644363] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.949 [2024-11-20 06:35:47.644385] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.949 [2024-11-20 06:35:47.644495] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.949 [2024-11-20 06:35:47.644509] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.949 [2024-11-20 06:35:47.644516] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644523] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.949 [2024-11-20 
06:35:47.644531] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:15.949 [2024-11-20 06:35:47.644548] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644563] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.644573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.949 [2024-11-20 06:35:47.644594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.949 [2024-11-20 06:35:47.644665] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.949 [2024-11-20 06:35:47.644677] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.949 [2024-11-20 06:35:47.644684] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644690] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.949 [2024-11-20 06:35:47.644698] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:15.949 [2024-11-20 06:35:47.644706] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:15.949 [2024-11-20 06:35:47.644719] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:15.949 [2024-11-20 06:35:47.644734] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:15.949 [2024-11-20 06:35:47.644748] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644756] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.644767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.949 [2024-11-20 06:35:47.644789] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.949 [2024-11-20 06:35:47.644904] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.949 [2024-11-20 06:35:47.644921] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.949 [2024-11-20 06:35:47.644928] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644934] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=4096, cccid=0 00:25:15.949 [2024-11-20 06:35:47.644942] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0100) on tqpair(0xc8e690): expected_datao=0, payload_size=4096 00:25:15.949 [2024-11-20 06:35:47.644953] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644972] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.644981] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 
00:25:15.949 [2024-11-20 06:35:47.685418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.949 [2024-11-20 06:35:47.685439] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.949 [2024-11-20 06:35:47.685447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.949 [2024-11-20 06:35:47.685466] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:15.949 [2024-11-20 06:35:47.685475] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:15.949 [2024-11-20 06:35:47.685482] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:15.949 [2024-11-20 06:35:47.685494] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:15.949 [2024-11-20 06:35:47.685503] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:15.949 [2024-11-20 06:35:47.685512] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:15.949 [2024-11-20 06:35:47.685532] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:15.949 [2024-11-20 06:35:47.685546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685554] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685561] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.685572] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.949 [2024-11-20 06:35:47.685597] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.949 [2024-11-20 06:35:47.685681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.949 [2024-11-20 06:35:47.685695] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.949 [2024-11-20 06:35:47.685702] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685709] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.949 [2024-11-20 06:35:47.685719] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685733] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.685743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.949 [2024-11-20 06:35:47.685753] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=1 on tqpair(0xc8e690) 00:25:15.949 [2024-11-20 06:35:47.685775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.949 [2024-11-20 06:35:47.685785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.949 [2024-11-20 06:35:47.685791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.685798] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.685806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.950 [2024-11-20 06:35:47.685820] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.685828] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.685834] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.685843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.950 [2024-11-20 06:35:47.685852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.685867] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.685878] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.685885] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.685895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.950 [2024-11-20 06:35:47.685918] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0100, cid 0, qid 0 00:25:15.950 [2024-11-20 06:35:47.685929] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0280, cid 1, qid 0 00:25:15.950 [2024-11-20 06:35:47.685937] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0400, cid 2, qid 0 00:25:15.950 [2024-11-20 06:35:47.685945] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0580, cid 3, qid 0 00:25:15.950 [2024-11-20 06:35:47.685954] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.950 [2024-11-20 06:35:47.686094] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.950 [2024-11-20 06:35:47.686107] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.950 [2024-11-20 06:35:47.686114] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686120] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.950 [2024-11-20 06:35:47.686133] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:15.950 [2024-11-20 06:35:47.686143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 
00:25:15.950 [2024-11-20 06:35:47.686157] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.686168] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.686178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686185] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686192] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.686202] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:15.950 [2024-11-20 06:35:47.686224] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.950 [2024-11-20 06:35:47.686358] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.950 [2024-11-20 06:35:47.686373] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.950 [2024-11-20 06:35:47.686380] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.950 [2024-11-20 06:35:47.686456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.686481] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.686496] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686504] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.686515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.950 [2024-11-20 06:35:47.686537] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.950 [2024-11-20 06:35:47.686675] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.950 [2024-11-20 06:35:47.686690] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.950 [2024-11-20 06:35:47.686697] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686703] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=4096, cccid=4 00:25:15.950 [2024-11-20 06:35:47.686711] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0700) on tqpair(0xc8e690): expected_datao=0, payload_size=4096 00:25:15.950 [2024-11-20 06:35:47.686718] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686735] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.686744] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731319] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.950 [2024-11-20 06:35:47.731337] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.950 [2024-11-20 06:35:47.731344] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731351] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.950 [2024-11-20 06:35:47.731367] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:15.950 [2024-11-20 06:35:47.731389] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.731422] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.731438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731446] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.731458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.950 [2024-11-20 06:35:47.731482] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.950 [2024-11-20 06:35:47.731626] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.950 [2024-11-20 06:35:47.731641] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.950 [2024-11-20 06:35:47.731648] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731655] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=4096, cccid=4 00:25:15.950 [2024-11-20 06:35:47.731662] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0700) on tqpair(0xc8e690): expected_datao=0, payload_size=4096 00:25:15.950 [2024-11-20 06:35:47.731670] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731680] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731687] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.950 [2024-11-20 06:35:47.731709] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.950 [2024-11-20 06:35:47.731720] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731727] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.950 [2024-11-20 06:35:47.731749] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.731769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.731783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.950 [2024-11-20 06:35:47.731802] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.950 [2024-11-20 06:35:47.731825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.950 [2024-11-20 06:35:47.731922] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.950 [2024-11-20 06:35:47.731937] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.950 [2024-11-20 06:35:47.731944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731950] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=4096, cccid=4 00:25:15.950 [2024-11-20 06:35:47.731957] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0700) on tqpair(0xc8e690): expected_datao=0, payload_size=4096 00:25:15.950 [2024-11-20 06:35:47.731965] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731981] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.731990] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.772408] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.950 [2024-11-20 06:35:47.772427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.950 [2024-11-20 06:35:47.772435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.950 [2024-11-20 06:35:47.772442] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.950 [2024-11-20 06:35:47.772456] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.772473] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.772489] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.772501] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.772510] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:15.950 [2024-11-20 06:35:47.772518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:15.951 [2024-11-20 06:35:47.772527] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:15.951 [2024-11-20 06:35:47.772535] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:15.951 [2024-11-20 06:35:47.772544] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:15.951 [2024-11-20 06:35:47.772563] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.951 
[2024-11-20 06:35:47.772587] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.772600] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772614] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.772623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:15.951 [2024-11-20 06:35:47.772650] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.951 [2024-11-20 06:35:47.772663] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0880, cid 5, qid 0 00:25:15.951 [2024-11-20 06:35:47.772749] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.951 [2024-11-20 06:35:47.772761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.951 [2024-11-20 06:35:47.772768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772775] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.951 [2024-11-20 06:35:47.772785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.951 [2024-11-20 06:35:47.772794] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.951 [2024-11-20 06:35:47.772800] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0880) on tqpair=0xc8e690 00:25:15.951 [2024-11-20 06:35:47.772822] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772831] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.772841] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.772862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0880, cid 5, qid 0 00:25:15.951 [2024-11-20 06:35:47.772947] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.951 [2024-11-20 06:35:47.772961] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.951 [2024-11-20 06:35:47.772968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.772975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0880) on tqpair=0xc8e690 00:25:15.951 [2024-11-20 06:35:47.772991] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773000] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.773010] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.773031] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0880, cid 5, qid 0 00:25:15.951 [2024-11-20 06:35:47.773127] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: 
*DEBUG*: pdu type = 5 00:25:15.951 [2024-11-20 06:35:47.773140] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.951 [2024-11-20 06:35:47.773147] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773153] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0880) on tqpair=0xc8e690 00:25:15.951 [2024-11-20 06:35:47.773168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773177] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.773187] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.773208] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0880, cid 5, qid 0 00:25:15.951 [2024-11-20 06:35:47.773291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.951 [2024-11-20 06:35:47.773316] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.951 [2024-11-20 06:35:47.773325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773332] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0880) on tqpair=0xc8e690 00:25:15.951 [2024-11-20 06:35:47.773357] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773369] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.773380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.773393] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773401] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.773410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.773422] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.773439] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.773451] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773458] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc8e690) 00:25:15.951 [2024-11-20 06:35:47.773468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.951 [2024-11-20 06:35:47.773490] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0880, cid 5, qid 0 00:25:15.951 [2024-11-20 06:35:47.773501] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0700, cid 4, qid 0 00:25:15.951 [2024-11-20 06:35:47.773509] 
nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0a00, cid 6, qid 0 00:25:15.951 [2024-11-20 06:35:47.773517] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0b80, cid 7, qid 0 00:25:15.951 [2024-11-20 06:35:47.773687] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.951 [2024-11-20 06:35:47.773704] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.951 [2024-11-20 06:35:47.773711] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773718] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=8192, cccid=5 00:25:15.951 [2024-11-20 06:35:47.773725] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0880) on tqpair(0xc8e690): expected_datao=0, payload_size=8192 00:25:15.951 [2024-11-20 06:35:47.773736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773756] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773765] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773778] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.951 [2024-11-20 06:35:47.773788] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.951 [2024-11-20 06:35:47.773794] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773800] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=512, cccid=4 00:25:15.951 [2024-11-20 06:35:47.773808] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0700) on tqpair(0xc8e690): expected_datao=0, payload_size=512 00:25:15.951 [2024-11-20 06:35:47.773815] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773824] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773834] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.951 [2024-11-20 06:35:47.773844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.951 [2024-11-20 06:35:47.773853] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.952 [2024-11-20 06:35:47.773859] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773865] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xc8e690): datao=0, datal=512, cccid=6 00:25:15.952 [2024-11-20 06:35:47.773872] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0a00) on tqpair(0xc8e690): expected_datao=0, payload_size=512 00:25:15.952 [2024-11-20 06:35:47.773880] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773889] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773895] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773903] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:15.952 [2024-11-20 06:35:47.773912] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:15.952 [2024-11-20 06:35:47.773919] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773925] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xc8e690): datao=0, datal=4096, cccid=7 00:25:15.952 [2024-11-20 06:35:47.773932] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xcf0b80) on tqpair(0xc8e690): expected_datao=0, payload_size=4096 00:25:15.952 [2024-11-20 06:35:47.773939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773948] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773956] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.952 [2024-11-20 06:35:47.773973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.952 [2024-11-20 06:35:47.773979] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.773985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0880) on tqpair=0xc8e690 00:25:15.952 [2024-11-20 06:35:47.774006] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.952 [2024-11-20 06:35:47.774018] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.952 [2024-11-20 06:35:47.774025] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.774047] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0700) on tqpair=0xc8e690 00:25:15.952 [2024-11-20 06:35:47.774063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.952 [2024-11-20 06:35:47.774073] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.952 [2024-11-20 06:35:47.774080] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.774086] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0a00) on tqpair=0xc8e690 00:25:15.952 [2024-11-20 06:35:47.774096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.952 [2024-11-20 06:35:47.774105] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.952 [2024-11-20 06:35:47.774112] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.952 [2024-11-20 06:35:47.774118] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0b80) on tqpair=0xc8e690 00:25:15.952 ===================================================== 00:25:15.952 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:15.952 ===================================================== 00:25:15.952 Controller Capabilities/Features 00:25:15.952 ================================ 00:25:15.952 Vendor ID: 8086 00:25:15.952 Subsystem Vendor ID: 8086 00:25:15.952 Serial Number: SPDK00000000000001 00:25:15.952 Model Number: SPDK bdev Controller 00:25:15.952 Firmware Version: 25.01 00:25:15.952 Recommended Arb Burst: 6 00:25:15.952 IEEE OUI Identifier: e4 d2 5c 00:25:15.952 Multi-path I/O 00:25:15.952 May have multiple subsystem ports: Yes 00:25:15.952 May have multiple controllers: Yes 00:25:15.952 Associated with SR-IOV VF: No 00:25:15.952 Max Data Transfer Size: 131072 00:25:15.952 Max Number of Namespaces: 32 00:25:15.952 Max Number of I/O Queues: 127 00:25:15.952 NVMe Specification Version (VS): 1.3 00:25:15.952 NVMe Specification Version (Identify): 1.3 00:25:15.952 Maximum Queue Entries: 128 00:25:15.952 Contiguous Queues Required: Yes 00:25:15.952 Arbitration Mechanisms Supported 00:25:15.952 Weighted Round Robin: Not Supported 
00:25:15.952 Vendor Specific: Not Supported 00:25:15.952 Reset Timeout: 15000 ms 00:25:15.952 Doorbell Stride: 4 bytes 00:25:15.952 NVM Subsystem Reset: Not Supported 00:25:15.952 Command Sets Supported 00:25:15.952 NVM Command Set: Supported 00:25:15.952 Boot Partition: Not Supported 00:25:15.952 Memory Page Size Minimum: 4096 bytes 00:25:15.952 Memory Page Size Maximum: 4096 bytes 00:25:15.952 Persistent Memory Region: Not Supported 00:25:15.952 Optional Asynchronous Events Supported 00:25:15.952 Namespace Attribute Notices: Supported 00:25:15.952 Firmware Activation Notices: Not Supported 00:25:15.952 ANA Change Notices: Not Supported 00:25:15.952 PLE Aggregate Log Change Notices: Not Supported 00:25:15.952 LBA Status Info Alert Notices: Not Supported 00:25:15.952 EGE Aggregate Log Change Notices: Not Supported 00:25:15.952 Normal NVM Subsystem Shutdown event: Not Supported 00:25:15.952 Zone Descriptor Change Notices: Not Supported 00:25:15.952 Discovery Log Change Notices: Not Supported 00:25:15.952 Controller Attributes 00:25:15.952 128-bit Host Identifier: Supported 00:25:15.952 Non-Operational Permissive Mode: Not Supported 00:25:15.952 NVM Sets: Not Supported 00:25:15.952 Read Recovery Levels: Not Supported 00:25:15.952 Endurance Groups: Not Supported 00:25:15.952 Predictable Latency Mode: Not Supported 00:25:15.952 Traffic Based Keep ALive: Not Supported 00:25:15.952 Namespace Granularity: Not Supported 00:25:15.952 SQ Associations: Not Supported 00:25:15.952 UUID List: Not Supported 00:25:15.952 Multi-Domain Subsystem: Not Supported 00:25:15.952 Fixed Capacity Management: Not Supported 00:25:15.952 Variable Capacity Management: Not Supported 00:25:15.952 Delete Endurance Group: Not Supported 00:25:15.952 Delete NVM Set: Not Supported 00:25:15.952 Extended LBA Formats Supported: Not Supported 00:25:15.952 Flexible Data Placement Supported: Not Supported 00:25:15.952 00:25:15.952 Controller Memory Buffer Support 00:25:15.952 ================================ 00:25:15.952 Supported: No 00:25:15.952 00:25:15.952 Persistent Memory Region Support 00:25:15.952 ================================ 00:25:15.952 Supported: No 00:25:15.952 00:25:15.952 Admin Command Set Attributes 00:25:15.952 ============================ 00:25:15.952 Security Send/Receive: Not Supported 00:25:15.952 Format NVM: Not Supported 00:25:15.952 Firmware Activate/Download: Not Supported 00:25:15.952 Namespace Management: Not Supported 00:25:15.952 Device Self-Test: Not Supported 00:25:15.952 Directives: Not Supported 00:25:15.952 NVMe-MI: Not Supported 00:25:15.952 Virtualization Management: Not Supported 00:25:15.952 Doorbell Buffer Config: Not Supported 00:25:15.952 Get LBA Status Capability: Not Supported 00:25:15.952 Command & Feature Lockdown Capability: Not Supported 00:25:15.952 Abort Command Limit: 4 00:25:15.952 Async Event Request Limit: 4 00:25:15.952 Number of Firmware Slots: N/A 00:25:15.952 Firmware Slot 1 Read-Only: N/A 00:25:15.952 Firmware Activation Without Reset: N/A 00:25:15.952 Multiple Update Detection Support: N/A 00:25:15.952 Firmware Update Granularity: No Information Provided 00:25:15.952 Per-Namespace SMART Log: No 00:25:15.952 Asymmetric Namespace Access Log Page: Not Supported 00:25:15.952 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:15.952 Command Effects Log Page: Supported 00:25:15.952 Get Log Page Extended Data: Supported 00:25:15.952 Telemetry Log Pages: Not Supported 00:25:15.952 Persistent Event Log Pages: Not Supported 00:25:15.952 Supported Log Pages Log Page: May Support 
00:25:15.952 Commands Supported & Effects Log Page: Not Supported 00:25:15.952 Feature Identifiers & Effects Log Page:May Support 00:25:15.952 NVMe-MI Commands & Effects Log Page: May Support 00:25:15.952 Data Area 4 for Telemetry Log: Not Supported 00:25:15.952 Error Log Page Entries Supported: 128 00:25:15.952 Keep Alive: Supported 00:25:15.952 Keep Alive Granularity: 10000 ms 00:25:15.952 00:25:15.952 NVM Command Set Attributes 00:25:15.952 ========================== 00:25:15.952 Submission Queue Entry Size 00:25:15.952 Max: 64 00:25:15.952 Min: 64 00:25:15.952 Completion Queue Entry Size 00:25:15.952 Max: 16 00:25:15.952 Min: 16 00:25:15.953 Number of Namespaces: 32 00:25:15.953 Compare Command: Supported 00:25:15.953 Write Uncorrectable Command: Not Supported 00:25:15.953 Dataset Management Command: Supported 00:25:15.953 Write Zeroes Command: Supported 00:25:15.953 Set Features Save Field: Not Supported 00:25:15.953 Reservations: Supported 00:25:15.953 Timestamp: Not Supported 00:25:15.953 Copy: Supported 00:25:15.953 Volatile Write Cache: Present 00:25:15.953 Atomic Write Unit (Normal): 1 00:25:15.953 Atomic Write Unit (PFail): 1 00:25:15.953 Atomic Compare & Write Unit: 1 00:25:15.953 Fused Compare & Write: Supported 00:25:15.953 Scatter-Gather List 00:25:15.953 SGL Command Set: Supported 00:25:15.953 SGL Keyed: Supported 00:25:15.953 SGL Bit Bucket Descriptor: Not Supported 00:25:15.953 SGL Metadata Pointer: Not Supported 00:25:15.953 Oversized SGL: Not Supported 00:25:15.953 SGL Metadata Address: Not Supported 00:25:15.953 SGL Offset: Supported 00:25:15.953 Transport SGL Data Block: Not Supported 00:25:15.953 Replay Protected Memory Block: Not Supported 00:25:15.953 00:25:15.953 Firmware Slot Information 00:25:15.953 ========================= 00:25:15.953 Active slot: 1 00:25:15.953 Slot 1 Firmware Revision: 25.01 00:25:15.953 00:25:15.953 00:25:15.953 Commands Supported and Effects 00:25:15.953 ============================== 00:25:15.953 Admin Commands 00:25:15.953 -------------- 00:25:15.953 Get Log Page (02h): Supported 00:25:15.953 Identify (06h): Supported 00:25:15.953 Abort (08h): Supported 00:25:15.953 Set Features (09h): Supported 00:25:15.953 Get Features (0Ah): Supported 00:25:15.953 Asynchronous Event Request (0Ch): Supported 00:25:15.953 Keep Alive (18h): Supported 00:25:15.953 I/O Commands 00:25:15.953 ------------ 00:25:15.953 Flush (00h): Supported LBA-Change 00:25:15.953 Write (01h): Supported LBA-Change 00:25:15.953 Read (02h): Supported 00:25:15.953 Compare (05h): Supported 00:25:15.953 Write Zeroes (08h): Supported LBA-Change 00:25:15.953 Dataset Management (09h): Supported LBA-Change 00:25:15.953 Copy (19h): Supported LBA-Change 00:25:15.953 00:25:15.953 Error Log 00:25:15.953 ========= 00:25:15.953 00:25:15.953 Arbitration 00:25:15.953 =========== 00:25:15.953 Arbitration Burst: 1 00:25:15.953 00:25:15.953 Power Management 00:25:15.953 ================ 00:25:15.953 Number of Power States: 1 00:25:15.953 Current Power State: Power State #0 00:25:15.953 Power State #0: 00:25:15.953 Max Power: 0.00 W 00:25:15.953 Non-Operational State: Operational 00:25:15.953 Entry Latency: Not Reported 00:25:15.953 Exit Latency: Not Reported 00:25:15.953 Relative Read Throughput: 0 00:25:15.953 Relative Read Latency: 0 00:25:15.953 Relative Write Throughput: 0 00:25:15.953 Relative Write Latency: 0 00:25:15.953 Idle Power: Not Reported 00:25:15.953 Active Power: Not Reported 00:25:15.953 Non-Operational Permissive Mode: Not Supported 00:25:15.953 00:25:15.953 Health 
Information 00:25:15.953 ================== 00:25:15.953 Critical Warnings: 00:25:15.953 Available Spare Space: OK 00:25:15.953 Temperature: OK 00:25:15.953 Device Reliability: OK 00:25:15.953 Read Only: No 00:25:15.953 Volatile Memory Backup: OK 00:25:15.953 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:15.953 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:15.953 Available Spare: 0% 00:25:15.953 Available Spare Threshold: 0% 00:25:15.953 Life Percentage Used:[2024-11-20 06:35:47.774228] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xc8e690) 00:25:15.953 [2024-11-20 06:35:47.774251] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.953 [2024-11-20 06:35:47.774273] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0b80, cid 7, qid 0 00:25:15.953 [2024-11-20 06:35:47.774413] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.953 [2024-11-20 06:35:47.774432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.953 [2024-11-20 06:35:47.774440] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774447] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0b80) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774490] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:15.953 [2024-11-20 06:35:47.774509] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0100) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.953 [2024-11-20 06:35:47.774530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0280) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.953 [2024-11-20 06:35:47.774545] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0400) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.953 [2024-11-20 06:35:47.774560] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0580) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:15.953 [2024-11-20 06:35:47.774580] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774594] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8e690) 00:25:15.953 [2024-11-20 06:35:47.774605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.953 [2024-11-20 06:35:47.774628] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0580, cid 3, qid 0 00:25:15.953 [2024-11-20 
06:35:47.774751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.953 [2024-11-20 06:35:47.774766] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.953 [2024-11-20 06:35:47.774773] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0580) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774790] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774804] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8e690) 00:25:15.953 [2024-11-20 06:35:47.774815] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.953 [2024-11-20 06:35:47.774841] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0580, cid 3, qid 0 00:25:15.953 [2024-11-20 06:35:47.774959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.953 [2024-11-20 06:35:47.774973] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.953 [2024-11-20 06:35:47.774980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.774986] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0580) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.774994] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:15.953 [2024-11-20 06:35:47.775002] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:15.953 [2024-11-20 06:35:47.775017] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.775025] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.775032] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8e690) 00:25:15.953 [2024-11-20 06:35:47.775046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.953 [2024-11-20 06:35:47.775068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0580, cid 3, qid 0 00:25:15.953 [2024-11-20 06:35:47.775151] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:15.953 [2024-11-20 06:35:47.775165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:15.953 [2024-11-20 06:35:47.775172] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.775179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0580) on tqpair=0xc8e690 00:25:15.953 [2024-11-20 06:35:47.775195] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.775203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:15.953 [2024-11-20 06:35:47.775210] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8e690) 00:25:15.953 [2024-11-20 06:35:47.775220] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:15.953 [2024-11-20 06:35:47.775241] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0580, cid 3, qid 0 00:25:16.212 [2024-11-20 06:35:47.779318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:16.212 [2024-11-20 06:35:47.779334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:16.212 [2024-11-20 06:35:47.779342] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:16.212 [2024-11-20 06:35:47.779348] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0580) on tqpair=0xc8e690 00:25:16.212 [2024-11-20 06:35:47.779380] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:16.212 [2024-11-20 06:35:47.779390] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:16.212 [2024-11-20 06:35:47.779397] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xc8e690) 00:25:16.212 [2024-11-20 06:35:47.779408] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:16.212 [2024-11-20 06:35:47.779430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xcf0580, cid 3, qid 0 00:25:16.212 [2024-11-20 06:35:47.779553] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:16.212 [2024-11-20 06:35:47.779565] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:16.212 [2024-11-20 06:35:47.779572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:16.212 [2024-11-20 06:35:47.779579] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xcf0580) on tqpair=0xc8e690 00:25:16.212 [2024-11-20 06:35:47.779591] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:25:16.212 0% 00:25:16.212 Data Units Read: 0 00:25:16.212 Data Units Written: 0 00:25:16.212 Host Read Commands: 0 00:25:16.212 Host Write Commands: 0 00:25:16.212 Controller Busy Time: 0 minutes 00:25:16.212 Power Cycles: 0 00:25:16.212 Power On Hours: 0 hours 00:25:16.212 Unsafe Shutdowns: 0 00:25:16.212 Unrecoverable Media Errors: 0 00:25:16.212 Lifetime Error Log Entries: 0 00:25:16.212 Warning Temperature Time: 0 minutes 00:25:16.212 Critical Temperature Time: 0 minutes 00:25:16.212 00:25:16.212 Number of Queues 00:25:16.212 ================ 00:25:16.212 Number of I/O Submission Queues: 127 00:25:16.212 Number of I/O Completion Queues: 127 00:25:16.212 00:25:16.212 Active Namespaces 00:25:16.212 ================= 00:25:16.212 Namespace ID:1 00:25:16.212 Error Recovery Timeout: Unlimited 00:25:16.212 Command Set Identifier: NVM (00h) 00:25:16.212 Deallocate: Supported 00:25:16.212 Deallocated/Unwritten Error: Not Supported 00:25:16.212 Deallocated Read Value: Unknown 00:25:16.212 Deallocate in Write Zeroes: Not Supported 00:25:16.212 Deallocated Guard Field: 0xFFFF 00:25:16.212 Flush: Supported 00:25:16.212 Reservation: Supported 00:25:16.212 Namespace Sharing Capabilities: Multiple Controllers 00:25:16.212 Size (in LBAs): 131072 (0GiB) 00:25:16.212 Capacity (in LBAs): 131072 (0GiB) 00:25:16.212 Utilization (in LBAs): 131072 (0GiB) 00:25:16.212 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:16.212 EUI64: ABCDEF0123456789 00:25:16.212 UUID: a22f35c4-8eca-4319-86cc-717f56ab0c74 00:25:16.212 Thin Provisioning: Not Supported 00:25:16.212 Per-NS Atomic Units: Yes 00:25:16.212 Atomic Boundary Size (Normal): 0 00:25:16.212 Atomic Boundary Size (PFail): 0 00:25:16.212 Atomic Boundary Offset: 0 00:25:16.212 
Maximum Single Source Range Length: 65535 00:25:16.212 Maximum Copy Length: 65535 00:25:16.212 Maximum Source Range Count: 1 00:25:16.212 NGUID/EUI64 Never Reused: No 00:25:16.212 Namespace Write Protected: No 00:25:16.212 Number of LBA Formats: 1 00:25:16.212 Current LBA Format: LBA Format #00 00:25:16.212 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:16.212 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:16.212 rmmod nvme_tcp 00:25:16.212 rmmod nvme_fabrics 00:25:16.212 rmmod nvme_keyring 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 2153459 ']' 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 2153459 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # '[' -z 2153459 ']' 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # kill -0 2153459 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # uname 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2153459 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2153459' 00:25:16.212 killing process with pid 2153459 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@971 -- # kill 2153459 00:25:16.212 06:35:47 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@976 -- # wait 2153459 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' 
== iso ']' 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:16.478 06:35:48 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.385 06:35:50 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:18.385 00:25:18.385 real 0m5.726s 00:25:18.385 user 0m5.066s 00:25:18.385 sys 0m2.012s 00:25:18.385 06:35:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:18.385 06:35:50 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:18.385 ************************************ 00:25:18.385 END TEST nvmf_identify 00:25:18.385 ************************************ 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.644 ************************************ 00:25:18.644 START TEST nvmf_perf 00:25:18.644 ************************************ 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:18.644 * Looking for test storage... 
00:25:18.644 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lcov --version 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.644 --rc genhtml_branch_coverage=1 00:25:18.644 --rc genhtml_function_coverage=1 00:25:18.644 --rc genhtml_legend=1 00:25:18.644 --rc geninfo_all_blocks=1 00:25:18.644 --rc geninfo_unexecuted_blocks=1 00:25:18.644 00:25:18.644 ' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.644 --rc genhtml_branch_coverage=1 00:25:18.644 --rc genhtml_function_coverage=1 00:25:18.644 --rc genhtml_legend=1 00:25:18.644 --rc geninfo_all_blocks=1 00:25:18.644 --rc geninfo_unexecuted_blocks=1 00:25:18.644 00:25:18.644 ' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.644 --rc genhtml_branch_coverage=1 00:25:18.644 --rc genhtml_function_coverage=1 00:25:18.644 --rc genhtml_legend=1 00:25:18.644 --rc geninfo_all_blocks=1 00:25:18.644 --rc geninfo_unexecuted_blocks=1 00:25:18.644 00:25:18.644 ' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:18.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:18.644 --rc genhtml_branch_coverage=1 00:25:18.644 --rc genhtml_function_coverage=1 00:25:18.644 --rc genhtml_legend=1 00:25:18.644 --rc geninfo_all_blocks=1 00:25:18.644 --rc geninfo_unexecuted_blocks=1 00:25:18.644 00:25:18.644 ' 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:18.644 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:18.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:18.645 06:35:50 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:25:18.645 06:35:50 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:21.182 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:21.183 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:21.183 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:21.183 Found net devices under 0000:09:00.0: cvl_0_0 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:21.183 06:35:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:21.183 Found net devices under 0000:09:00.1: cvl_0_1 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:21.183 06:35:52 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:21.183 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:21.183 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.319 ms 00:25:21.183 00:25:21.183 --- 10.0.0.2 ping statistics --- 00:25:21.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.183 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:21.183 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:21.183 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.074 ms 00:25:21.183 00:25:21.183 --- 10.0.0.1 ping statistics --- 00:25:21.183 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:21.183 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=2155610 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 2155610 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # '[' -z 2155610 ']' 00:25:21.183 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:25:21.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.184 [2024-11-20 06:35:52.728741] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:25:21.184 [2024-11-20 06:35:52.728827] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:21.184 [2024-11-20 06:35:52.802035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:21.184 [2024-11-20 06:35:52.856856] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:21.184 [2024-11-20 06:35:52.856909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:21.184 [2024-11-20 06:35:52.856936] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:21.184 [2024-11-20 06:35:52.856948] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:21.184 [2024-11-20 06:35:52.856957] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:21.184 [2024-11-20 06:35:52.858695] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:21.184 [2024-11-20 06:35:52.858761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.184 [2024-11-20 06:35:52.858828] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:21.184 [2024-11-20 06:35:52.858832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@866 -- # return 0 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:21.184 06:35:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:21.184 06:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:21.184 06:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:21.184 06:35:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:24.465 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:24.465 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:24.722 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:0b:00.0 00:25:24.722 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:25.006 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 
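The RPC traffic in the surrounding entries is host/perf.sh assembling the NVMe-oF/TCP target it is about to measure: a malloc bdev and the local NVMe bdev are exported through a single subsystem listening on 10.0.0.2:4420. A condensed sketch of that sequence, with the Jenkins workspace prefix shortened to a hypothetical $SPDK_ROOT and the NQN, serial, addresses and sizes copied from the log entries above and below:

    rpc=$SPDK_ROOT/scripts/rpc.py                 # talks to the nvmf_tgt started above (pid 2155610)
    $rpc bdev_malloc_create 64 512                # 64 MB malloc bdev, 512-byte blocks; returns "Malloc0"
    $rpc nvmf_create_transport -t tcp -o          # TCP transport ("-o" flag as recorded in the log)
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

Once this completes, any NVMe/TCP initiator inside the initiator namespace can discover and connect to nqn.2016-06.io.spdk:cnode1 at 10.0.0.2:4420, which is exactly what the spdk_nvme_perf runs further down do.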
00:25:25.006 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:0b:00.0 ']' 00:25:25.006 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:25.006 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:25.006 06:35:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:25.288 [2024-11-20 06:35:56.998530] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:25.288 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:25.547 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.547 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:25.804 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:25.804 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:26.062 06:35:57 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:26.320 [2024-11-20 06:35:58.081958] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:26.320 06:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:26.578 06:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:0b:00.0 ']' 00:25:26.578 06:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:25:26.578 06:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:26.578 06:35:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:0b:00.0' 00:25:27.952 Initializing NVMe Controllers 00:25:27.952 Attached to NVMe Controller at 0000:0b:00.0 [8086:0a54] 00:25:27.952 Associating PCIE (0000:0b:00.0) NSID 1 with lcore 0 00:25:27.952 Initialization complete. Launching workers. 
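The spdk_nvme_perf invocation above attaches straight to the local SSD over PCIe to establish a baseline before the fabric runs; its results table follows immediately below, and the subsequent runs point the same tool at the TCP listener instead. As a hedged sketch (workspace prefix shortened to a hypothetical $SPDK_ROOT, addresses copied from the log), the main thing that changes between the two cases is the -r transport ID string:

    # local PCIe baseline, i.e. the run whose table follows
    $SPDK_ROOT/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:PCIe traddr:0000:0b:00.0'
    # hypothetical equivalent aimed at the NVMe/TCP listener created earlier
    $SPDK_ROOT/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'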
00:25:27.952 ======================================================== 00:25:27.952 Latency(us) 00:25:27.952 Device Information : IOPS MiB/s Average min max 00:25:27.952 PCIE (0000:0b:00.0) NSID 1 from core 0: 85158.25 332.65 375.27 31.95 4750.78 00:25:27.952 ======================================================== 00:25:27.952 Total : 85158.25 332.65 375.27 31.95 4750.78 00:25:27.952 00:25:27.952 06:35:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:29.325 Initializing NVMe Controllers 00:25:29.325 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:29.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:29.325 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:29.325 Initialization complete. Launching workers. 00:25:29.325 ======================================================== 00:25:29.325 Latency(us) 00:25:29.325 Device Information : IOPS MiB/s Average min max 00:25:29.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 98.75 0.39 10247.17 146.33 45801.04 00:25:29.325 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.83 0.26 15201.11 6028.13 52819.31 00:25:29.325 ======================================================== 00:25:29.325 Total : 165.58 0.65 12246.65 146.33 52819.31 00:25:29.325 00:25:29.325 06:36:00 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:30.699 Initializing NVMe Controllers 00:25:30.699 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:30.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:30.699 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:30.699 Initialization complete. Launching workers. 00:25:30.699 ======================================================== 00:25:30.699 Latency(us) 00:25:30.699 Device Information : IOPS MiB/s Average min max 00:25:30.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8501.95 33.21 3765.47 703.76 9104.70 00:25:30.699 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3829.98 14.96 8400.37 5069.64 16939.49 00:25:30.699 ======================================================== 00:25:30.699 Total : 12331.93 48.17 5204.95 703.76 16939.49 00:25:30.699 00:25:30.699 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:30.699 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:30.699 06:36:02 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.983 Initializing NVMe Controllers 00:25:33.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.983 Controller IO queue size 128, less than required. 00:25:33.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:25:33.983 Controller IO queue size 128, less than required. 00:25:33.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:33.983 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:33.983 Initialization complete. Launching workers. 00:25:33.983 ======================================================== 00:25:33.983 Latency(us) 00:25:33.983 Device Information : IOPS MiB/s Average min max 00:25:33.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1659.09 414.77 78098.40 55098.78 142931.01 00:25:33.983 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 598.45 149.61 225843.90 77792.06 376911.23 00:25:33.983 ======================================================== 00:25:33.983 Total : 2257.54 564.39 117264.19 55098.78 376911.23 00:25:33.983 00:25:33.983 06:36:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:33.983 No valid NVMe controllers or AIO or URING devices found 00:25:33.983 Initializing NVMe Controllers 00:25:33.983 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.983 Controller IO queue size 128, less than required. 00:25:33.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.983 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:33.983 Controller IO queue size 128, less than required. 00:25:33.983 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:33.983 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:25:33.983 WARNING: Some requested NVMe devices were skipped 00:25:33.983 06:36:05 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:36.510 Initializing NVMe Controllers 00:25:36.510 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:36.510 Controller IO queue size 128, less than required. 00:25:36.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:36.510 Controller IO queue size 128, less than required. 00:25:36.510 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:36.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:36.510 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:36.510 Initialization complete. Launching workers. 
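Because the run above was started with --transport-stat, perf prints per-queue TCP transport counters (polls, idle_polls, sock_completions, nvme_completions, submitted_requests) before its latency table; those counters appear next. As a rough worked example using the NSID 1 numbers printed below, they can be reduced to a busy-poll ratio and completions per productive poll; a hypothetical post-processing snippet:

    awk -v polls=9040 -v idle=5798 -v completions=6207 'BEGIN {
        busy = polls - idle                                    # 3242 iterations actually found work
        printf "busy-poll ratio:           %.2f\n", busy / polls         # ~0.36
        printf "completions per busy poll: %.2f\n", completions / busy   # ~1.91
    }'

Roughly a third of the poll-loop iterations did useful work during this 2-second, queue-depth-128 run; the latency table that follows remains the primary result.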
00:25:36.510 00:25:36.510 ==================== 00:25:36.510 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:36.510 TCP transport: 00:25:36.510 polls: 9040 00:25:36.510 idle_polls: 5798 00:25:36.510 sock_completions: 3242 00:25:36.510 nvme_completions: 6207 00:25:36.511 submitted_requests: 9306 00:25:36.511 queued_requests: 1 00:25:36.511 00:25:36.511 ==================== 00:25:36.511 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:36.511 TCP transport: 00:25:36.511 polls: 26878 00:25:36.511 idle_polls: 23573 00:25:36.511 sock_completions: 3305 00:25:36.511 nvme_completions: 6341 00:25:36.511 submitted_requests: 9504 00:25:36.511 queued_requests: 1 00:25:36.511 ======================================================== 00:25:36.511 Latency(us) 00:25:36.511 Device Information : IOPS MiB/s Average min max 00:25:36.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1550.44 387.61 84185.88 55238.87 133906.23 00:25:36.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1583.91 395.98 81518.34 54244.71 145185.37 00:25:36.511 ======================================================== 00:25:36.511 Total : 3134.35 783.59 82837.87 54244.71 145185.37 00:25:36.511 00:25:36.511 06:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:36.511 06:36:07 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:36.511 rmmod nvme_tcp 00:25:36.511 rmmod nvme_fabrics 00:25:36.511 rmmod nvme_keyring 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 2155610 ']' 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 2155610 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # '[' -z 2155610 ']' 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # kill -0 2155610 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # uname 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2155610 00:25:36.511 06:36:08 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2155610' 00:25:36.511 killing process with pid 2155610 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@971 -- # kill 2155610 00:25:36.511 06:36:08 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@976 -- # wait 2155610 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:38.413 06:36:09 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:40.318 00:25:40.318 real 0m21.603s 00:25:40.318 user 1m6.968s 00:25:40.318 sys 0m5.537s 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:40.318 ************************************ 00:25:40.318 END TEST nvmf_perf 00:25:40.318 ************************************ 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.318 ************************************ 00:25:40.318 START TEST nvmf_fio_host 00:25:40.318 ************************************ 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:40.318 * Looking for test storage... 
00:25:40.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lcov --version 00:25:40.318 06:36:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.318 --rc genhtml_branch_coverage=1 00:25:40.318 --rc genhtml_function_coverage=1 00:25:40.318 --rc genhtml_legend=1 00:25:40.318 --rc geninfo_all_blocks=1 00:25:40.318 --rc geninfo_unexecuted_blocks=1 00:25:40.318 00:25:40.318 ' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.318 --rc genhtml_branch_coverage=1 00:25:40.318 --rc genhtml_function_coverage=1 00:25:40.318 --rc genhtml_legend=1 00:25:40.318 --rc geninfo_all_blocks=1 00:25:40.318 --rc geninfo_unexecuted_blocks=1 00:25:40.318 00:25:40.318 ' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.318 --rc genhtml_branch_coverage=1 00:25:40.318 --rc genhtml_function_coverage=1 00:25:40.318 --rc genhtml_legend=1 00:25:40.318 --rc geninfo_all_blocks=1 00:25:40.318 --rc geninfo_unexecuted_blocks=1 00:25:40.318 00:25:40.318 ' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:40.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:40.318 --rc genhtml_branch_coverage=1 00:25:40.318 --rc genhtml_function_coverage=1 00:25:40.318 --rc genhtml_legend=1 00:25:40.318 --rc geninfo_all_blocks=1 00:25:40.318 --rc geninfo_unexecuted_blocks=1 00:25:40.318 00:25:40.318 ' 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.318 06:36:12 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.318 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:40.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:40.319 
06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:25:40.319 06:36:12 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:42.853 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.853 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:42.854 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:42.854 Found net devices under 0000:09:00.0: cvl_0_0 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:42.854 Found net devices under 0000:09:00.1: cvl_0_1 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:42.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:42.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:25:42.854 00:25:42.854 --- 10.0.0.2 ping statistics --- 00:25:42.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.854 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:42.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:42.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.113 ms 00:25:42.854 00:25:42.854 --- 10.0.0.1 ping statistics --- 00:25:42.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:42.854 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:42.854 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2160254 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2160254 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # '[' -z 2160254 ']' 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:42.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.855 [2024-11-20 06:36:14.388109] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
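The sequence traced above (nvmf_tcp_init) builds the loopback NVMe/TCP topology used for the rest of this host test: the target-side port cvl_0_0 is moved into a private network namespace and addressed as 10.0.0.2, while the initiator-side port cvl_0_1 stays in the root namespace as 10.0.0.1, with an iptables rule opening the listener port and a ping in each direction as a sanity check. Collected into a standalone sketch (interface names, addresses and port taken from this run):

    # Sketch only: the nvmf_tcp_init steps shown in the trace above.
    ip netns add cvl_0_0_ns_spdk                        # private namespace for the target side
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # open the NVMe/TCP port
    ping -c 1 10.0.0.2                                  # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator

In the helper itself the iptables rule is additionally tagged with an SPDK_NVMF comment so that nvmftestfini can strip it later via iptables-save | grep -v SPDK_NVMF | iptables-restore. The nvmf_tgt target started just above runs inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so the fio traffic crosses the cvl_0_1 <-> cvl_0_0 link rather than plain loopback.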
00:25:42.855 [2024-11-20 06:36:14.388197] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:42.855 [2024-11-20 06:36:14.460200] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:42.855 [2024-11-20 06:36:14.516429] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:42.855 [2024-11-20 06:36:14.516491] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:42.855 [2024-11-20 06:36:14.516520] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:42.855 [2024-11-20 06:36:14.516531] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:42.855 [2024-11-20 06:36:14.516540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:42.855 [2024-11-20 06:36:14.517971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:42.855 [2024-11-20 06:36:14.518079] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:42.855 [2024-11-20 06:36:14.518165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:42.855 [2024-11-20 06:36:14.518168] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@866 -- # return 0 00:25:42.855 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:43.113 [2024-11-20 06:36:14.884269] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:43.113 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:43.113 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:43.113 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.113 06:36:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:43.680 Malloc1 00:25:43.680 06:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:43.680 06:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:43.938 06:36:15 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:44.196 [2024-11-20 06:36:16.009643] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:44.454 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # 
PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local sanitizers 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:44.713 06:36:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:44.713 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:44.713 fio-3.35 00:25:44.713 Starting 1 thread 00:25:47.243 00:25:47.243 test: (groupid=0, jobs=1): 
err= 0: pid=2160614: Wed Nov 20 06:36:18 2024 00:25:47.243 read: IOPS=8796, BW=34.4MiB/s (36.0MB/s)(69.0MiB/2007msec) 00:25:47.243 slat (nsec): min=1927, max=114170, avg=2443.91, stdev=1398.19 00:25:47.243 clat (usec): min=2508, max=13446, avg=7906.70, stdev=709.23 00:25:47.243 lat (usec): min=2531, max=13449, avg=7909.15, stdev=709.16 00:25:47.243 clat percentiles (usec): 00:25:47.243 | 1.00th=[ 6390], 5.00th=[ 6849], 10.00th=[ 7046], 20.00th=[ 7308], 00:25:47.243 | 30.00th=[ 7570], 40.00th=[ 7701], 50.00th=[ 7898], 60.00th=[ 8094], 00:25:47.243 | 70.00th=[ 8225], 80.00th=[ 8455], 90.00th=[ 8848], 95.00th=[ 8979], 00:25:47.243 | 99.00th=[ 9634], 99.50th=[ 9765], 99.90th=[11600], 99.95th=[12518], 00:25:47.243 | 99.99th=[13435] 00:25:47.243 bw ( KiB/s): min=32872, max=36168, per=99.99%, avg=35182.00, stdev=1566.82, samples=4 00:25:47.244 iops : min= 8218, max= 9042, avg=8795.50, stdev=391.71, samples=4 00:25:47.244 write: IOPS=8805, BW=34.4MiB/s (36.1MB/s)(69.0MiB/2007msec); 0 zone resets 00:25:47.244 slat (usec): min=2, max=104, avg= 2.57, stdev= 1.22 00:25:47.244 clat (usec): min=986, max=13662, avg=6573.98, stdev=607.92 00:25:47.244 lat (usec): min=992, max=13665, avg=6576.55, stdev=607.90 00:25:47.244 clat percentiles (usec): 00:25:47.244 | 1.00th=[ 5276], 5.00th=[ 5669], 10.00th=[ 5866], 20.00th=[ 6128], 00:25:47.244 | 30.00th=[ 6259], 40.00th=[ 6456], 50.00th=[ 6587], 60.00th=[ 6718], 00:25:47.244 | 70.00th=[ 6849], 80.00th=[ 7046], 90.00th=[ 7242], 95.00th=[ 7504], 00:25:47.244 | 99.00th=[ 7832], 99.50th=[ 8094], 99.90th=[11076], 99.95th=[12518], 00:25:47.244 | 99.99th=[13698] 00:25:47.244 bw ( KiB/s): min=33776, max=36224, per=99.99%, avg=35220.00, stdev=1138.84, samples=4 00:25:47.244 iops : min= 8444, max= 9056, avg=8805.00, stdev=284.71, samples=4 00:25:47.244 lat (usec) : 1000=0.01% 00:25:47.244 lat (msec) : 2=0.03%, 4=0.11%, 10=99.62%, 20=0.24% 00:25:47.244 cpu : usr=64.41%, sys=34.00%, ctx=79, majf=0, minf=32 00:25:47.244 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:25:47.244 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:47.244 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:47.244 issued rwts: total=17655,17673,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:47.244 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:47.244 00:25:47.244 Run status group 0 (all jobs): 00:25:47.244 READ: bw=34.4MiB/s (36.0MB/s), 34.4MiB/s-34.4MiB/s (36.0MB/s-36.0MB/s), io=69.0MiB (72.3MB), run=2007-2007msec 00:25:47.244 WRITE: bw=34.4MiB/s (36.1MB/s), 34.4MiB/s-34.4MiB/s (36.1MB/s-36.1MB/s), io=69.0MiB (72.4MB), run=2007-2007msec 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1362 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1341 -- # local sanitizers 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # shift 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # local asan_lib= 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libasan 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # asan_lib= 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:47.244 06:36:18 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:47.502 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:47.502 fio-3.35 00:25:47.502 Starting 1 thread 00:25:50.035 00:25:50.035 test: (groupid=0, jobs=1): err= 0: pid=2160949: Wed Nov 20 06:36:21 2024 00:25:50.035 read: IOPS=7759, BW=121MiB/s (127MB/s)(243MiB/2008msec) 00:25:50.035 slat (usec): min=2, max=129, avg= 3.69, stdev= 1.74 00:25:50.035 clat (usec): min=2011, max=18439, avg=9287.21, stdev=2218.39 00:25:50.035 lat (usec): min=2015, max=18444, avg=9290.90, stdev=2218.50 00:25:50.035 clat percentiles (usec): 00:25:50.035 | 1.00th=[ 4883], 5.00th=[ 5932], 10.00th=[ 6587], 20.00th=[ 7308], 00:25:50.035 | 30.00th=[ 8029], 40.00th=[ 8586], 50.00th=[ 9241], 60.00th=[ 9765], 00:25:50.035 | 70.00th=[10421], 80.00th=[10945], 90.00th=[12256], 95.00th=[13435], 00:25:50.035 | 99.00th=[14877], 99.50th=[15401], 99.90th=[16188], 99.95th=[17433], 00:25:50.035 | 99.99th=[18220] 00:25:50.035 bw ( KiB/s): min=58112, max=69344, per=51.87%, avg=64392.00, stdev=5535.56, samples=4 00:25:50.035 iops : min= 3632, max= 4334, avg=4024.50, stdev=345.97, samples=4 00:25:50.035 write: IOPS=4579, BW=71.6MiB/s 
(75.0MB/s)(132MiB/1842msec); 0 zone resets 00:25:50.035 slat (usec): min=30, max=202, avg=33.38, stdev= 5.64 00:25:50.035 clat (usec): min=6940, max=22521, avg=12476.67, stdev=2446.10 00:25:50.035 lat (usec): min=6973, max=22564, avg=12510.05, stdev=2446.93 00:25:50.035 clat percentiles (usec): 00:25:50.035 | 1.00th=[ 7963], 5.00th=[ 8979], 10.00th=[ 9634], 20.00th=[10421], 00:25:50.035 | 30.00th=[11076], 40.00th=[11600], 50.00th=[12125], 60.00th=[12911], 00:25:50.035 | 70.00th=[13566], 80.00th=[14484], 90.00th=[15664], 95.00th=[16909], 00:25:50.035 | 99.00th=[19268], 99.50th=[20055], 99.90th=[21890], 99.95th=[22152], 00:25:50.035 | 99.99th=[22414] 00:25:50.035 bw ( KiB/s): min=60192, max=72128, per=91.34%, avg=66920.00, stdev=5846.54, samples=4 00:25:50.035 iops : min= 3762, max= 4508, avg=4182.50, stdev=365.41, samples=4 00:25:50.035 lat (msec) : 4=0.16%, 10=47.26%, 20=52.39%, 50=0.18% 00:25:50.035 cpu : usr=76.58%, sys=22.12%, ctx=50, majf=0, minf=54 00:25:50.035 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:50.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:50.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:50.035 issued rwts: total=15581,8435,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:50.035 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:50.035 00:25:50.035 Run status group 0 (all jobs): 00:25:50.035 READ: bw=121MiB/s (127MB/s), 121MiB/s-121MiB/s (127MB/s-127MB/s), io=243MiB (255MB), run=2008-2008msec 00:25:50.035 WRITE: bw=71.6MiB/s (75.0MB/s), 71.6MiB/s-71.6MiB/s (75.0MB/s-75.0MB/s), io=132MiB (138MB), run=1842-1842msec 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:50.035 rmmod nvme_tcp 00:25:50.035 rmmod nvme_fabrics 00:25:50.035 rmmod nvme_keyring 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 2160254 ']' 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 2160254 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # '[' -z 2160254 ']' 00:25:50.035 
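Before the two fio jobs above could run, the target inside the namespace was provisioned over the RPC socket and fio was pointed at it through the SPDK fio plugin (LD_PRELOAD of build/fio/spdk_nvme, with the NVMe/TCP target encoded in --filename). Condensed into a standalone sketch, with the rpc.py path shortened to $rpc and all arguments taken from this run:

    # Sketch: the rpc.py provisioning and fio_plugin invocation traced above.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_malloc_create 64 512 -b Malloc1            # 64 MB malloc bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Run fio against that listener through the preloaded SPDK NVMe ioengine.
    LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme \
        /usr/src/fio/fio \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096

The ioengine=spdk line reported by fio confirms the plugin took effect; at a 4 KiB block size the ~8.8k read IOPS of the first job correspond to the 34.4 MiB/s shown in its run status group.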
06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # kill -0 2160254 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # uname 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2160254 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2160254' 00:25:50.035 killing process with pid 2160254 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@971 -- # kill 2160254 00:25:50.035 06:36:21 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@976 -- # wait 2160254 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:50.293 06:36:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:52.831 00:25:52.831 real 0m12.196s 00:25:52.831 user 0m35.606s 00:25:52.831 sys 0m4.116s 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.831 ************************************ 00:25:52.831 END TEST nvmf_fio_host 00:25:52.831 ************************************ 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.831 ************************************ 00:25:52.831 START TEST nvmf_failover 00:25:52.831 ************************************ 00:25:52.831 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:52.831 * Looking for test storage... 00:25:52.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lcov --version 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:25:52.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.832 --rc genhtml_branch_coverage=1 00:25:52.832 --rc genhtml_function_coverage=1 00:25:52.832 --rc genhtml_legend=1 00:25:52.832 --rc geninfo_all_blocks=1 00:25:52.832 --rc geninfo_unexecuted_blocks=1 00:25:52.832 00:25:52.832 ' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:25:52.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.832 --rc genhtml_branch_coverage=1 00:25:52.832 --rc genhtml_function_coverage=1 00:25:52.832 --rc genhtml_legend=1 00:25:52.832 --rc geninfo_all_blocks=1 00:25:52.832 --rc geninfo_unexecuted_blocks=1 00:25:52.832 00:25:52.832 ' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:25:52.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.832 --rc genhtml_branch_coverage=1 00:25:52.832 --rc genhtml_function_coverage=1 00:25:52.832 --rc genhtml_legend=1 00:25:52.832 --rc geninfo_all_blocks=1 00:25:52.832 --rc geninfo_unexecuted_blocks=1 00:25:52.832 00:25:52.832 ' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:25:52.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:52.832 --rc genhtml_branch_coverage=1 00:25:52.832 --rc genhtml_function_coverage=1 00:25:52.832 --rc genhtml_legend=1 00:25:52.832 --rc geninfo_all_blocks=1 00:25:52.832 --rc geninfo_unexecuted_blocks=1 00:25:52.832 00:25:52.832 ' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:52.832 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
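The long run of scripts/common.sh lines above is the coverage preamble of failover.sh deciding whether the installed lcov is older than version 2 (lt 1.15 2): the two version strings are split on '.', '-' and ':' and compared element by element. A condensed sketch of that comparison (same logic as the trace walks through, not the verbatim scripts/common.sh implementation):

    # Sketch: element-wise version compare as traced above (lt == "is $1 < $2 ?").
    lt() {
        local IFS=.-:                      # split version strings on '.', '-' and ':'
        local -a ver1=($1) ver2=($2)
        local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < len; v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                           # equal versions are not "less than"
    }
    lt 1.15 2 && echo "lcov is pre-2.x: keep the --rc lcov_*_coverage=1 options"

Because 1 < 2 settles the comparison at the first element, lt 1.15 2 succeeds in this run, which is why the LCOV_OPTS/LCOV exports above carry the lcov_branch_coverage and lcov_function_coverage flags.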
00:25:52.832 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:25:52.833 06:36:24 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:25:54.736 Found 0000:09:00.0 (0x8086 - 0x159b) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:25:54.736 Found 0000:09:00.1 (0x8086 - 0x159b) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci 
in "${pci_devs[@]}" 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:25:54.736 Found net devices under 0000:09:00.0: cvl_0_0 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:25:54.736 Found net devices under 0000:09:00.1: cvl_0_1 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:54.736 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:54.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:54.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:25:54.995 00:25:54.995 --- 10.0.0.2 ping statistics --- 00:25:54.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.995 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:54.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:54.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:25:54.995 00:25:54.995 --- 10.0.0.1 ping statistics --- 00:25:54.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:54.995 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=2163270 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@510 -- # waitforlisten 2163270 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2163270 ']' 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:54.995 06:36:26 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:54.995 [2024-11-20 06:36:26.766263] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:25:54.995 [2024-11-20 06:36:26.766378] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:55.284 [2024-11-20 06:36:26.845141] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:55.284 [2024-11-20 06:36:26.904401] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:25:55.284 [2024-11-20 06:36:26.904454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.284 [2024-11-20 06:36:26.904477] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.284 [2024-11-20 06:36:26.904488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.284 [2024-11-20 06:36:26.904498] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.284 [2024-11-20 06:36:26.907325] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:55.284 [2024-11-20 06:36:26.907396] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:55.284 [2024-11-20 06:36:26.907400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.284 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:55.583 [2024-11-20 06:36:27.291383] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.583 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:25:55.841 Malloc0 00:25:55.841 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:56.099 06:36:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:56.356 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:56.614 [2024-11-20 06:36:28.391998] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:56.614 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:25:56.871 [2024-11-20 06:36:28.656894] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:56.871 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:25:57.129 [2024-11-20 06:36:28.941907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4422 *** 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2163560 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2163560 /var/tmp/bdevperf.sock 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2163560 ']' 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:57.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:25:57.386 06:36:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:25:57.643 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:25:57.643 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:25:57.643 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:57.900 NVMe0n1 00:25:57.900 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:25:58.158 00:25:58.158 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2163664 00:25:58.158 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:58.158 06:36:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:25:59.532 06:36:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:59.532 06:36:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:02.814 06:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:03.072 00:26:03.072 06:36:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:03.330 06:36:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:06.612 06:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.612 [2024-11-20 06:36:38.329977] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.612 06:36:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:07.556 06:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:07.820 [2024-11-20 06:36:39.608683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.820 [2024-11-20 06:36:39.608744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.820 [2024-11-20 06:36:39.608768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.820 [2024-11-20 06:36:39.608780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608792] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608828] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608911] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608924] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 
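The trace above is essentially the whole fixture for this test: nvmf/common.sh moves cvl_0_0 into the cvl_0_0_ns_spdk namespace and starts nvmf_tgt inside it, then host/failover.sh exports Malloc0 through nqn.2016-06.io.spdk:cnode1 on ports 4420-4422 and points bdevperf at two of those paths with -x failover before it starts toggling listeners. A condensed sketch of that RPC sequence, reconstructed from the commands in the trace (with $rootdir standing in for the /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk checkout used in this run), looks like the following; it summarizes what the script drives, it is not the script itself:

  rpc="$rootdir/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc0
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do    # three listeners on the same subsystem
      $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # bdevperf is started with -z and driven over its own RPC socket; -x failover registers
  # the second trid as an alternate path to fail over to, not an active multipath leg.
  "$rootdir/build/examples/bdevperf" -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f &
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
  "$rootdir/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bdevperf.sock perform_tests &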
00:26:07.821 [2024-11-20 06:36:39.608936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608948] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608977] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.608990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609003] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609016] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609040] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609053] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 [2024-11-20 06:36:39.609112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1245b60 is same with the state(6) to be set 00:26:07.821 06:36:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2163664 00:26:14.388 { 00:26:14.388 "results": [ 00:26:14.388 { 00:26:14.388 "job": "NVMe0n1", 00:26:14.388 "core_mask": "0x1", 00:26:14.388 "workload": "verify", 00:26:14.388 "status": "finished", 00:26:14.388 "verify_range": { 00:26:14.388 "start": 0, 00:26:14.388 "length": 16384 00:26:14.388 }, 00:26:14.388 "queue_depth": 128, 00:26:14.388 "io_size": 4096, 00:26:14.388 "runtime": 15.004759, 00:26:14.388 "iops": 8478.110178244116, 00:26:14.388 "mibps": 33.11761788376608, 00:26:14.388 "io_failed": 15324, 00:26:14.388 "io_timeout": 0, 00:26:14.388 "avg_latency_us": 13447.695491155971, 00:26:14.388 "min_latency_us": 543.0992592592593, 00:26:14.388 "max_latency_us": 16602.453333333335 00:26:14.388 } 00:26:14.388 ], 00:26:14.388 "core_count": 1 00:26:14.388 } 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2163560 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2163560 ']' 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2163560 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@957 -- # uname 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2163560 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2163560' 00:26:14.388 killing process with pid 2163560 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2163560 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2163560 00:26:14.388 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:14.388 [2024-11-20 06:36:29.011396] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:26:14.388 [2024-11-20 06:36:29.011491] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2163560 ] 00:26:14.388 [2024-11-20 06:36:29.081934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.388 [2024-11-20 06:36:29.143385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.388 Running I/O for 15 seconds... 00:26:14.388 8632.00 IOPS, 33.72 MiB/s [2024-11-20T05:36:46.224Z] [2024-11-20 06:36:31.221662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:79608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-20 06:36:31.221718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-20 06:36:31.221747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:79616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-20 06:36:31.221764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-20 06:36:31.221791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:79624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-20 06:36:31.221805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-20 06:36:31.221821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:79632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-20 06:36:31.221834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.388 [2024-11-20 06:36:31.221858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:79640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.388 [2024-11-20 06:36:31.221872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:14.389 [2024-11-20 06:36:31.221888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.221917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.221931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:79656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.221945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.221960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:79664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.221975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.221989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:79672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:79680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:79688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:79696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:79704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:79712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222226] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:79744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:79760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:79776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:79784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:79792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222543] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:79808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:79816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:79824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:79864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:97 nsid:1 lba:79888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:79896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:79904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:79912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.222974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:79928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.222987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.223001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.223014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.223027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:79944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.223040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.223056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.389 [2024-11-20 06:36:31.223069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.389 [2024-11-20 06:36:31.223083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79968 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:79976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:79984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:80024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:80048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:14.390 [2024-11-20 06:36:31.223429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:80056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:80064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:80080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:80104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:80120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:80128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223724] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:80152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:80176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:80184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:80192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:80200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.223983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:80208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.223996] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:80216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:80224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:80232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.390 [2024-11-20 06:36:31.224197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:80272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.390 [2024-11-20 06:36:31.224210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:80280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:80296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:80312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:80320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:80344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:80352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:80368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.391 [2024-11-20 06:36:31.224571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:80376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:80384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:80392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:80400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:80408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:80416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:80424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:80432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:80440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 [2024-11-20 06:36:31.224860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:80448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:14.391 [2024-11-20 06:36:31.224873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.391 
00:26:14.391 [2024-11-20 06:36:31.224887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: queued WRITE commands sqid:1 nsid:1 lba:80456-80616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.392 [2024-11-20 06:36:31.225535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dbc170 is same with the state(6) to be set
00:26:14.392 [2024-11-20 06:36:31.225552] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:14.392 [2024-11-20 06:36:31.225563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:80624 len:8 PRP1 0x0 PRP2 0x0, ABORTED - SQ DELETION (00/08)
00:26:14.392 [2024-11-20 06:36:31.225680] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:26:14.392 [2024-11-20 06:36:31.225735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08)
00:26:14.392 [2024-11-20 06:36:31.225869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state.
00:26:14.392 [2024-11-20 06:36:31.229186] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:26:14.392 [2024-11-20 06:36:31.229227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9b560 (9): Bad file descriptor
00:26:14.392 [2024-11-20 06:36:31.412910] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful.
00:26:14.392 7807.50 IOPS, 30.50 MiB/s [2024-11-20T05:36:46.228Z] 8099.33 IOPS, 31.64 MiB/s [2024-11-20T05:36:46.228Z] 8307.75 IOPS, 32.45 MiB/s [2024-11-20T05:36:46.228Z]
00:26:14.392 [2024-11-20 06:36:35.042790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: queued WRITE commands sqid:1 nsid:1 lba:872-1624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 and READ commands sqid:1 nsid:1 lba:616-712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:14.395 [2024-11-20 06:36:35.046082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: READ sqid:1 cid:0 nsid:1 lba:720-864 len:8 PRP1 0x0 PRP2 0x0 and WRITE sqid:1 cid:0 nsid:1 lba:1632 len:8 PRP1 0x0 PRP2 0x0, each ABORTED - SQ DELETION (00/08)
00:26:14.395 [2024-11-20 06:36:35.046129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:26:14.396 [2024-11-20 06:36:35.047156] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:26:14.396 [2024-11-20 06:36:35.047198] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08)
00:26:14.396 [2024-11-20 06:36:35.047374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:26:14.396 [2024-11-20 06:36:35.050656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:26:14.396 [2024-11-20 06:36:35.050696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9b560 (9): Bad file descriptor
00:26:14.396 8316.40 IOPS, 32.49 MiB/s [2024-11-20T05:36:46.232Z]
00:26:14.396 [2024-11-20 06:36:35.207281] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
00:26:14.396 8183.67 IOPS, 31.97 MiB/s [2024-11-20T05:36:46.232Z] 8246.86 IOPS, 32.21 MiB/s [2024-11-20T05:36:46.232Z] 8293.88 IOPS, 32.40 MiB/s [2024-11-20T05:36:46.232Z] 8341.78 IOPS, 32.59 MiB/s [2024-11-20T05:36:46.232Z] [2024-11-20 06:36:39.611637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:94064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:14.396 [2024-11-20 06:36:39.611679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.396 [... the same print_command / ABORTED - SQ DELETION (00/08) completion pair repeats for every remaining queued command on qid:1 (READs lba 94072-94296 and WRITEs lba 94304-95048); from lba 94848 onward the entries come from nvme_qpair_abort_queued_reqs ("aborting queued i/o") completing the requests manually; the repeated notices are elided ...] 00:26:14.399 [2024-11-20 06:36:39.615882]
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.399 [2024-11-20 06:36:39.615893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.399 [2024-11-20 06:36:39.615904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95056 len:8 PRP1 0x0 PRP2 0x0 00:26:14.399 [2024-11-20 06:36:39.615917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.399 [2024-11-20 06:36:39.615930] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.399 [2024-11-20 06:36:39.615941] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.400 [2024-11-20 06:36:39.615952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95064 len:8 PRP1 0x0 PRP2 0x0 00:26:14.400 [2024-11-20 06:36:39.615964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.615983] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.400 [2024-11-20 06:36:39.615995] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.400 [2024-11-20 06:36:39.616006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95072 len:8 PRP1 0x0 PRP2 0x0 00:26:14.400 [2024-11-20 06:36:39.616019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.616035] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:14.400 [2024-11-20 06:36:39.616047] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:14.400 [2024-11-20 06:36:39.616058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:95080 len:8 PRP1 0x0 PRP2 0x0 00:26:14.400 [2024-11-20 06:36:39.616070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.616132] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:26:14.400 [2024-11-20 06:36:39.616169] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.400 [2024-11-20 06:36:39.616188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.616202] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.400 [2024-11-20 06:36:39.616216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.616230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.400 [2024-11-20 06:36:39.616242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.616256] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:14.400 [2024-11-20 06:36:39.616268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:14.400 [2024-11-20 06:36:39.616281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:14.400 [2024-11-20 06:36:39.616337] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d9b560 (9): Bad file descriptor 00:26:14.400 [2024-11-20 06:36:39.619582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:14.400 [2024-11-20 06:36:39.648003] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:26:14.400 8347.90 IOPS, 32.61 MiB/s [2024-11-20T05:36:46.236Z] 8388.64 IOPS, 32.77 MiB/s [2024-11-20T05:36:46.236Z] 8412.58 IOPS, 32.86 MiB/s [2024-11-20T05:36:46.236Z] 8442.23 IOPS, 32.98 MiB/s [2024-11-20T05:36:46.236Z] 8451.93 IOPS, 33.02 MiB/s 00:26:14.400 Latency(us) 00:26:14.400 [2024-11-20T05:36:46.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.400 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:14.400 Verification LBA range: start 0x0 length 0x4000 00:26:14.400 NVMe0n1 : 15.00 8478.11 33.12 1021.28 0.00 13447.70 543.10 16602.45 00:26:14.400 [2024-11-20T05:36:46.236Z] =================================================================================================================== 00:26:14.400 [2024-11-20T05:36:46.236Z] Total : 8478.11 33.12 1021.28 0.00 13447.70 543.10 16602.45 00:26:14.400 Received shutdown signal, test time was about 15.000000 seconds 00:26:14.400 00:26:14.400 Latency(us) 00:26:14.400 [2024-11-20T05:36:46.236Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:14.400 [2024-11-20T05:36:46.236Z] =================================================================================================================== 00:26:14.400 [2024-11-20T05:36:46.236Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2165425 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2165425 /var/tmp/bdevperf.sock 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # '[' -z 2165425 ']' 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:26:14.400 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@866 -- # return 0 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:14.400 [2024-11-20 06:36:45.916772] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:14.400 06:36:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:14.400 [2024-11-20 06:36:46.189595] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:14.400 06:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:14.966 NVMe0n1 00:26:14.966 06:36:46 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:15.224 00:26:15.224 06:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:15.791 00:26:15.791 06:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:15.791 06:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:16.049 06:36:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:16.307 06:36:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:19.585 06:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:19.585 06:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:19.585 06:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2166095 00:26:19.585 06:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:19.585 06:36:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2166095 
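Stripped of the xtrace prefixes, the setup traced above is: add two more TCP listeners for the subsystem on the target, attach all three portals to the same bdevperf controller with -x failover so 4421 and 4422 become alternate paths of NVMe0, check that the controller exists, then drop the active 4420 path so outstanding I/O is forced to fail over. A standalone sketch of that sequence using the addresses from this run (it assumes a target already serving nqn.2016-06.io.spdk:cnode1 on 10.0.0.2:4420 and a bdevperf instance listening on /var/tmp/bdevperf.sock, as in this log):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode1
# target side: expose the subsystem on two additional TCP ports
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4421
$RPC nvmf_subsystem_add_listener $NQN -t tcp -a 10.0.0.2 -s 4422
# initiator side (bdevperf): register 4420/4421/4422 as failover paths of one NVMe0 controller
for port in 4420 4421 4422; do
    $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
        -a 10.0.0.2 -s $port -f ipv4 -n $NQN -x failover
done
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
# remove the first path; I/O already queued through it should fail over to 4421 or 4422
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n $NQN

The JSON block that perform_tests prints for this configuration follows immediately below.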
00:26:20.958 { 00:26:20.958 "results": [ 00:26:20.958 { 00:26:20.958 "job": "NVMe0n1", 00:26:20.958 "core_mask": "0x1", 00:26:20.958 "workload": "verify", 00:26:20.958 "status": "finished", 00:26:20.958 "verify_range": { 00:26:20.958 "start": 0, 00:26:20.958 "length": 16384 00:26:20.958 }, 00:26:20.958 "queue_depth": 128, 00:26:20.958 "io_size": 4096, 00:26:20.958 "runtime": 1.052683, 00:26:20.958 "iops": 8449.837225451536, 00:26:20.958 "mibps": 33.00717666192006, 00:26:20.958 "io_failed": 0, 00:26:20.958 "io_timeout": 0, 00:26:20.958 "avg_latency_us": 14543.638112797455, 00:26:20.958 "min_latency_us": 3373.8903703703704, 00:26:20.958 "max_latency_us": 44079.02814814815 00:26:20.958 } 00:26:20.958 ], 00:26:20.958 "core_count": 1 00:26:20.958 } 00:26:20.958 06:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:20.958 [2024-11-20 06:36:45.416211] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:26:20.958 [2024-11-20 06:36:45.416343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2165425 ] 00:26:20.958 [2024-11-20 06:36:45.485868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.958 [2024-11-20 06:36:45.542203] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:20.958 [2024-11-20 06:36:48.038600] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:20.958 [2024-11-20 06:36:48.038699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.958 [2024-11-20 06:36:48.038721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.958 [2024-11-20 06:36:48.038752] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.958 [2024-11-20 06:36:48.038766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.958 [2024-11-20 06:36:48.038780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.958 [2024-11-20 06:36:48.038794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.958 [2024-11-20 06:36:48.038808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:20.958 [2024-11-20 06:36:48.038821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:20.958 [2024-11-20 06:36:48.038840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 
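The JSON block a few lines above is the result of that run: roughly 8450 IOPS over the one-second job with io_failed still 0 even though a path was pulled mid-run. Its headline numbers can be extracted from the perform_tests output, and the reset count checked the same way the script's grep -c 'Resetting controller successful' check earlier in this trace does; a sketch only (jq is not part of the test, and the field names are taken from the JSON printed above):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# kick off the configured bdevperf job and summarize its JSON result
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests \
    | jq -r '.results[0] | "\(.iops) IOPS, \(.io_failed) failed I/O, avg \(.avg_latency_us) us"'
# count how many forced failovers ended in a successful controller reset
grep -c 'Resetting controller successful' $SPDK/test/nvmf/host/try.txt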
00:26:20.958 [2024-11-20 06:36:48.038886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:26:20.958 [2024-11-20 06:36:48.038918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f32560 (9): Bad file descriptor
00:26:20.958 [2024-11-20 06:36:48.083470] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:26:20.958 Running I/O for 1 seconds...
00:26:20.958 8757.00 IOPS, 34.21 MiB/s
00:26:20.958 Latency(us)
00:26:20.958 [2024-11-20T05:36:52.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:20.958 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:20.958 Verification LBA range: start 0x0 length 0x4000
00:26:20.958 NVMe0n1 : 1.05 8449.84 33.01 0.00 0.00 14543.64 3373.89 44079.03
00:26:20.958 [2024-11-20T05:36:52.794Z] ===================================================================================================================
00:26:20.958 [2024-11-20T05:36:52.794Z] Total : 8449.84 33.01 0.00 0.00 14543.64 3373.89 44079.03
00:26:20.958 06:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:20.958 06:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:26:21.216 06:36:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:21.474 06:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:21.474 06:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:26:21.735 06:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:26:21.995 06:36:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:26:25.274 06:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:26:25.274 06:36:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0
00:26:25.275 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2165425
00:26:25.275 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2165425 ']'
00:26:25.275 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2165425
00:26:25.275 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname
00:26:25.275 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:26:25.275 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2165425
00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_0
00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover --
common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2165425' 00:26:25.533 killing process with pid 2165425 00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2165425 00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2165425 00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:25.533 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:25.790 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:25.790 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:25.790 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:25.790 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:25.790 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:26:25.790 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:25.791 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:26:25.791 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:25.791 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:26.048 rmmod nvme_tcp 00:26:26.048 rmmod nvme_fabrics 00:26:26.048 rmmod nvme_keyring 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 2163270 ']' 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 2163270 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # '[' -z 2163270 ']' 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # kill -0 2163270 00:26:26.048 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # uname 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2163270 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2163270' 00:26:26.049 killing process with pid 2163270 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@971 -- # kill 2163270 00:26:26.049 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@976 -- # wait 2163270 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso 
']' 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:26.308 06:36:57 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.210 06:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:28.210 00:26:28.210 real 0m35.849s 00:26:28.210 user 2m6.319s 00:26:28.210 sys 0m5.949s 00:26:28.210 06:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:28.210 06:36:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:28.210 ************************************ 00:26:28.210 END TEST nvmf_failover 00:26:28.210 ************************************ 00:26:28.210 06:37:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:28.210 06:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:28.210 06:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:28.210 06:37:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:28.468 ************************************ 00:26:28.468 START TEST nvmf_host_discovery 00:26:28.468 ************************************ 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:28.468 * Looking for test storage... 
00:26:28.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lcov --version 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:28.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.468 --rc genhtml_branch_coverage=1 00:26:28.468 --rc genhtml_function_coverage=1 00:26:28.468 --rc genhtml_legend=1 00:26:28.468 --rc geninfo_all_blocks=1 00:26:28.468 --rc geninfo_unexecuted_blocks=1 00:26:28.468 00:26:28.468 ' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:28.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.468 --rc genhtml_branch_coverage=1 00:26:28.468 --rc genhtml_function_coverage=1 00:26:28.468 --rc genhtml_legend=1 00:26:28.468 --rc geninfo_all_blocks=1 00:26:28.468 --rc geninfo_unexecuted_blocks=1 00:26:28.468 00:26:28.468 ' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:28.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.468 --rc genhtml_branch_coverage=1 00:26:28.468 --rc genhtml_function_coverage=1 00:26:28.468 --rc genhtml_legend=1 00:26:28.468 --rc geninfo_all_blocks=1 00:26:28.468 --rc geninfo_unexecuted_blocks=1 00:26:28.468 00:26:28.468 ' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:28.468 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:28.468 --rc genhtml_branch_coverage=1 00:26:28.468 --rc genhtml_function_coverage=1 00:26:28.468 --rc genhtml_legend=1 00:26:28.468 --rc geninfo_all_blocks=1 00:26:28.468 --rc geninfo_unexecuted_blocks=1 00:26:28.468 00:26:28.468 ' 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:28.468 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:28.469 06:37:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:28.469 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:26:28.469 06:37:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:31.034 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:31.034 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:31.034 06:37:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:31.034 Found net devices under 0000:09:00.0: cvl_0_0 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:31.034 Found net devices under 0000:09:00.1: cvl_0_1 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:31.034 
06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:31.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:31.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.401 ms 00:26:31.034 00:26:31.034 --- 10.0.0.2 ping statistics --- 00:26:31.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.034 rtt min/avg/max/mdev = 0.401/0.401/0.401/0.000 ms 00:26:31.034 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:31.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:31.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.162 ms 00:26:31.035 00:26:31.035 --- 10.0.0.1 ping statistics --- 00:26:31.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:31.035 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=2168829 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 2168829 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2168829 ']' 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:31.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:31.035 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.035 [2024-11-20 06:37:02.643927] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
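The target network setup traced above (nvmf_tcp_init) reduces to the commands below. This is a sketch, not the harness code: the cvl_0_0/cvl_0_1 interface names and the 10.0.0.x addresses are simply the ones this run detected, and the iptables comment tag the harness adds for later cleanup is left out.

  # Move the target-side E810 port into its own namespace; the initiator keeps the second port.
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                  # initiator -> target check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator check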
00:26:31.035 [2024-11-20 06:37:02.644010] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:31.035 [2024-11-20 06:37:02.711569] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.035 [2024-11-20 06:37:02.766476] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:31.035 [2024-11-20 06:37:02.766525] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:31.035 [2024-11-20 06:37:02.766553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:31.035 [2024-11-20 06:37:02.766564] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:31.035 [2024-11-20 06:37:02.766573] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:31.035 [2024-11-20 06:37:02.767156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 [2024-11-20 06:37:02.907381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 [2024-11-20 06:37:02.915565] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 null0 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 null1 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2168970 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2168970 /tmp/host.sock 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # '[' -z 2168970 ']' 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:31.320 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:31.320 06:37:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.320 [2024-11-20 06:37:02.988914] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
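The remainder of the trace drives the actual discovery flow. Spelled out as explicit rpc.py calls it looks roughly like the sketch below; rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py, the target is assumed to answer on its default RPC socket, and the host application above was started with -r /tmp/host.sock.

  # Target side: enable the TCP transport, expose the discovery service on 8009, create a null bdev.
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
  scripts/rpc.py bdev_null_create null0 1000 512
  # Host side: follow the discovery subsystem on 10.0.0.2:8009 under the base name "nvme".
  scripts/rpc.py -s /tmp/host.sock log_set_flag bdev_nvme
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
  # Target side: publish a subsystem backed by the null bdev and allow the host NQN.
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
  # The discovery service then attaches controller nvme0; these report nvme0 and nvme0n1.
  scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers
  scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs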
00:26:31.320 [2024-11-20 06:37:02.988994] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2168970 ] 00:26:31.320 [2024-11-20 06:37:03.053975] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.320 [2024-11-20 06:37:03.112179] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@866 -- # return 0 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.579 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 [2024-11-20 06:37:03.501081] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:31.838 06:37:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == \n\v\m\e\0 ]] 00:26:31.838 06:37:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:32.772 [2024-11-20 06:37:04.284462] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:32.772 [2024-11-20 06:37:04.284486] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:32.772 [2024-11-20 06:37:04.284508] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:32.772 
[2024-11-20 06:37:04.370801] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:32.772 [2024-11-20 06:37:04.545934] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:26:32.772 [2024-11-20 06:37:04.546880] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16a2fa0:1 started. 00:26:32.772 [2024-11-20 06:37:04.548642] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:32.772 [2024-11-20 06:37:04.548662] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:32.772 [2024-11-20 06:37:04.553697] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16a2fa0 was disconnected and freed. delete nvme_qpair. 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.030 06:37:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # get_notification_count 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.030 [2024-11-20 06:37:04.838551] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x16717a0:1 started. 00:26:33.030 [2024-11-20 06:37:04.844031] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x16717a0 was disconnected and freed. delete nvme_qpair. 
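For readers following the xtrace, the helpers exercised at host/discovery.sh@55, @59 and @63 above are each a single rpc_cmd call against the host app's RPC socket piped through jq, sort and xargs. A rough reconstruction from the trace (rpc_cmd is the autotest wrapper around scripts/rpc.py; the exact bodies in host/discovery.sh may differ in detail):

    # Controller names attached on the host side (host/discovery.sh@59)
    get_subsystem_names() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
    }

    # Bdevs created by the discovery service (host/discovery.sh@55)
    get_bdev_list() {
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    # Listener ports (trsvcid) of the paths behind one controller, e.g. "4420 4421" (host/discovery.sh@63)
    get_subsystem_paths() {
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }
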
00:26:33.030 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 [2024-11-20 06:37:04.913043] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:33.289 [2024-11-20 06:37:04.914283] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:33.289 [2024-11-20 06:37:04.914325] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.289 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:33.290 06:37:04 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:33.290 06:37:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:33.290 06:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:33.290 06:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:33.290 06:37:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # sleep 1 00:26:33.290 [2024-11-20 06:37:05.042157] bdev_nvme.c:7308:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:33.290 [2024-11-20 06:37:05.100885] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:33.290 [2024-11-20 06:37:05.100933] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:33.290 [2024-11-20 06:37:05.100965] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:33.290 [2024-11-20 06:37:05.100974] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:34.224 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 [2024-11-20 06:37:06.124997] bdev_nvme.c:7366:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:34.484 [2024-11-20 06:37:06.125026] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:34.484 [2024-11-20 06:37:06.130076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.484 [2024-11-20 06:37:06.130107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.484 [2024-11-20 06:37:06.130137] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.484 [2024-11-20 06:37:06.130151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.484 [2024-11-20 06:37:06.130172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.484 [2024-11-20 06:37:06.130185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.484 [2024-11-20 06:37:06.130199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:34.484 [2024-11-20 06:37:06.130212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:34.484 [2024-11-20 06:37:06.130225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.484 [2024-11-20 06:37:06.140070] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.484 [2024-11-20 06:37:06.150113] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.484 [2024-11-20 06:37:06.150136] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.484 [2024-11-20 06:37:06.150146] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.484 [2024-11-20 06:37:06.150154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.484 [2024-11-20 06:37:06.150198] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.484 [2024-11-20 06:37:06.150329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.484 [2024-11-20 06:37:06.150359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.484 [2024-11-20 06:37:06.150375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.484 [2024-11-20 06:37:06.150398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.484 [2024-11-20 06:37:06.150419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.484 [2024-11-20 06:37:06.150433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.484 [2024-11-20 06:37:06.150448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.484 [2024-11-20 06:37:06.150461] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.484 [2024-11-20 06:37:06.150471] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:34.484 [2024-11-20 06:37:06.150479] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.484 [2024-11-20 06:37:06.160232] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.484 [2024-11-20 06:37:06.160254] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.484 [2024-11-20 06:37:06.160269] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.484 [2024-11-20 06:37:06.160277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.484 [2024-11-20 06:37:06.160327] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.484 [2024-11-20 06:37:06.160457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.484 [2024-11-20 06:37:06.160485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.484 [2024-11-20 06:37:06.160502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.484 [2024-11-20 06:37:06.160523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.484 [2024-11-20 06:37:06.160544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.484 [2024-11-20 06:37:06.160559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.484 [2024-11-20 06:37:06.160572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.484 [2024-11-20 06:37:06.160584] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.484 [2024-11-20 06:37:06.160593] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.484 [2024-11-20 06:37:06.160601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.484 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:34.484 [2024-11-20 06:37:06.170363] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.484 [2024-11-20 06:37:06.170389] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.484 [2024-11-20 06:37:06.170398] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.484 [2024-11-20 06:37:06.170406] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.484 [2024-11-20 06:37:06.170432] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
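The waitforcondition / notification-count pattern that repeats throughout this trace (common/autotest_common.sh@916-@922, host/discovery.sh@74-@80) is a bounded retry loop over an eval'd shell condition. A hedged sketch consistent with the trace (max=10 and the one-second sleep come from the xtrace; the real helpers may report failure or advance notify_id slightly differently):

    # Poll a shell condition roughly once a second for up to 10 tries (cf. autotest_common.sh@916-@922).
    waitforcondition() {
        local cond=$1 max=10
        while (( max-- )); do
            eval "$cond" && return 0
            sleep 1
        done
        return 1
    }

    # Count notifications newer than the last seen notify_id (cf. host/discovery.sh@74-@75).
    get_notification_count() {
        notification_count=$(rpc_cmd -s /tmp/host.sock notify_get_notifications -i "$notify_id" | jq '. | length')
        notify_id=$((notify_id + notification_count))
    }

    # Typical use, as seen above:
    # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
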
00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.485 [2024-11-20 06:37:06.170555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.485 [2024-11-20 06:37:06.170595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.485 [2024-11-20 06:37:06.170611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.485 [2024-11-20 06:37:06.170633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.485 [2024-11-20 06:37:06.170653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.485 [2024-11-20 06:37:06.170667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.485 [2024-11-20 06:37:06.170680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.485 [2024-11-20 06:37:06.170692] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.485 [2024-11-20 06:37:06.170701] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.485 [2024-11-20 06:37:06.170720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.485 [2024-11-20 06:37:06.180469] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.485 [2024-11-20 06:37:06.180494] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:34.485 [2024-11-20 06:37:06.180504] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.180512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.485 [2024-11-20 06:37:06.180537] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.180640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.485 [2024-11-20 06:37:06.180670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.485 [2024-11-20 06:37:06.180686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.485 [2024-11-20 06:37:06.180708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.485 [2024-11-20 06:37:06.180742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.485 [2024-11-20 06:37:06.180760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.485 [2024-11-20 06:37:06.180773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.485 [2024-11-20 06:37:06.180785] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.485 [2024-11-20 06:37:06.180794] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.485 [2024-11-20 06:37:06.180802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.485 [2024-11-20 06:37:06.190572] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.485 [2024-11-20 06:37:06.190609] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.485 [2024-11-20 06:37:06.190619] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.190626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.485 [2024-11-20 06:37:06.190655] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
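The repeated "connect() failed, errno = 111" / "Bad file descriptor" / "Resetting controller failed." messages in this stretch are the expected fallout of the listener removal issued at host/discovery.sh@127 above: errno 111 is ECONNREFUSED, so every reconnect attempt against the now-closed 10.0.0.2:4420 path fails until the discovery poller drops that path and only the 4421 path remains. The trigger, as issued in the trace against the target-side RPC socket:

    # host/discovery.sh@127 -- the 4420 path becomes unreachable after this
    rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
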
00:26:34.485 [2024-11-20 06:37:06.190805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.485 [2024-11-20 06:37:06.190833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.485 [2024-11-20 06:37:06.190849] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.485 [2024-11-20 06:37:06.190871] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.485 [2024-11-20 06:37:06.190904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.485 [2024-11-20 06:37:06.190922] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.485 [2024-11-20 06:37:06.190935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.485 [2024-11-20 06:37:06.190947] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.485 [2024-11-20 06:37:06.190956] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.485 [2024-11-20 06:37:06.190963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.485 [2024-11-20 06:37:06.200689] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.485 [2024-11-20 06:37:06.200710] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.485 [2024-11-20 06:37:06.200719] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.200726] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.485 [2024-11-20 06:37:06.200763] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.200930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.485 [2024-11-20 06:37:06.200958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.485 [2024-11-20 06:37:06.200974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.485 [2024-11-20 06:37:06.200995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.485 [2024-11-20 06:37:06.201027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.485 [2024-11-20 06:37:06.201045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.485 [2024-11-20 06:37:06.201058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:34.485 [2024-11-20 06:37:06.201070] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.485 [2024-11-20 06:37:06.201079] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.485 [2024-11-20 06:37:06.201086] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:34.485 [2024-11-20 06:37:06.210798] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:34.485 [2024-11-20 06:37:06.210835] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:34.485 [2024-11-20 06:37:06.210849] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.210857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:34.485 [2024-11-20 06:37:06.210896] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:34.485 [2024-11-20 06:37:06.210995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:34.485 [2024-11-20 06:37:06.211037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1673550 with addr=10.0.0.2, port=4420 00:26:34.485 [2024-11-20 06:37:06.211053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1673550 is same with the state(6) to be set 00:26:34.485 [2024-11-20 06:37:06.211075] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1673550 (9): Bad file descriptor 00:26:34.485 [2024-11-20 06:37:06.211109] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:34.485 [2024-11-20 06:37:06.211127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:34.485 [2024-11-20 06:37:06.211140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:34.485 [2024-11-20 06:37:06.211151] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:34.485 [2024-11-20 06:37:06.211160] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:34.485 [2024-11-20 06:37:06.211167] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:34.485 [2024-11-20 06:37:06.212104] bdev_nvme.c:7171:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:34.485 [2024-11-20 06:37:06.212132] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:34.485 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_paths nvme0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ 4421 == \4\4\2\1 ]] 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@919 -- # get_notification_count 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.486 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_subsystem_names 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:34.744 06:37:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_bdev_list 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # [[ '' == '' ]] 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # local max=10 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # (( max-- )) 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # get_notification_count 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # (( notification_count == expected_count )) 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # return 0 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:34.744 06:37:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:35.738 [2024-11-20 06:37:07.493435] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:35.738 [2024-11-20 06:37:07.493468] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:35.738 [2024-11-20 06:37:07.493502] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:35.995 [2024-11-20 06:37:07.579808] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:36.254 [2024-11-20 06:37:07.888388] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:36.254 [2024-11-20 06:37:07.889165] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1670880:1 started. 00:26:36.254 [2024-11-20 06:37:07.891342] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:36.254 [2024-11-20 06:37:07.891386] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.254 [2024-11-20 06:37:07.893024] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1670880 was disconnected and freed. delete nvme_qpair. 
00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.254 request: 00:26:36.254 { 00:26:36.254 "name": "nvme", 00:26:36.254 "trtype": "tcp", 00:26:36.254 "traddr": "10.0.0.2", 00:26:36.254 "adrfam": "ipv4", 00:26:36.254 "trsvcid": "8009", 00:26:36.254 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.254 "wait_for_attach": true, 00:26:36.254 "method": "bdev_nvme_start_discovery", 00:26:36.254 "req_id": 1 00:26:36.254 } 00:26:36.254 Got JSON-RPC error response 00:26:36.254 response: 00:26:36.254 { 00:26:36.254 "code": -17, 00:26:36.254 "message": "File exists" 00:26:36.254 } 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.254 request: 00:26:36.254 { 00:26:36.254 "name": "nvme_second", 00:26:36.254 "trtype": "tcp", 00:26:36.254 "traddr": "10.0.0.2", 00:26:36.254 "adrfam": "ipv4", 00:26:36.254 "trsvcid": "8009", 00:26:36.254 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:36.254 "wait_for_attach": true, 00:26:36.254 "method": "bdev_nvme_start_discovery", 00:26:36.254 "req_id": 1 00:26:36.254 } 00:26:36.254 Got JSON-RPC error response 00:26:36.254 response: 00:26:36.254 { 00:26:36.254 "code": -17, 00:26:36.254 "message": "File exists" 00:26:36.254 } 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:36.254 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- 
# set +x 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:36.255 06:37:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:36.255 06:37:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:37.625 [2024-11-20 06:37:09.082680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:37.625 [2024-11-20 06:37:09.082734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1670e70 with addr=10.0.0.2, port=8010 00:26:37.625 [2024-11-20 06:37:09.082761] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:37.625 [2024-11-20 06:37:09.082775] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:37.625 [2024-11-20 06:37:09.082788] 
bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:38.559 [2024-11-20 06:37:10.085166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:38.559 [2024-11-20 06:37:10.085232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1670e70 with addr=10.0.0.2, port=8010 00:26:38.559 [2024-11-20 06:37:10.085260] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:38.559 [2024-11-20 06:37:10.085274] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:38.559 [2024-11-20 06:37:10.085287] bdev_nvme.c:7452:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:39.493 [2024-11-20 06:37:11.087376] bdev_nvme.c:7427:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:39.493 request: 00:26:39.493 { 00:26:39.493 "name": "nvme_second", 00:26:39.493 "trtype": "tcp", 00:26:39.493 "traddr": "10.0.0.2", 00:26:39.493 "adrfam": "ipv4", 00:26:39.493 "trsvcid": "8010", 00:26:39.493 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:39.493 "wait_for_attach": false, 00:26:39.493 "attach_timeout_ms": 3000, 00:26:39.493 "method": "bdev_nvme_start_discovery", 00:26:39.493 "req_id": 1 00:26:39.493 } 00:26:39.493 Got JSON-RPC error response 00:26:39.493 response: 00:26:39.493 { 00:26:39.493 "code": -110, 00:26:39.493 "message": "Connection timed out" 00:26:39.493 } 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2168970 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@121 -- # sync 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:39.493 rmmod nvme_tcp 00:26:39.493 rmmod nvme_fabrics 00:26:39.493 rmmod nvme_keyring 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 2168829 ']' 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 2168829 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # '[' -z 2168829 ']' 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # kill -0 2168829 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # uname 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:26:39.493 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2168829 00:26:39.494 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:26:39.494 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:26:39.494 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2168829' 00:26:39.494 killing process with pid 2168829 00:26:39.494 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@971 -- # kill 2168829 00:26:39.494 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@976 -- # wait 2168829 00:26:39.752 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:39.752 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:39.752 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:39.752 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:26:39.752 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:26:39.753 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:39.753 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:26:39.753 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:39.753 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:39.753 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:39.753 06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:39.753 
06:37:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:42.288 00:26:42.288 real 0m13.453s 00:26:42.288 user 0m19.195s 00:26:42.288 sys 0m2.890s 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1128 -- # xtrace_disable 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:42.288 ************************************ 00:26:42.288 END TEST nvmf_host_discovery 00:26:42.288 ************************************ 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:42.288 ************************************ 00:26:42.288 START TEST nvmf_host_multipath_status 00:26:42.288 ************************************ 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:42.288 * Looking for test storage... 00:26:42.288 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:42.288 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lcov --version 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:42.289 06:37:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:26:42.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.289 --rc genhtml_branch_coverage=1 00:26:42.289 --rc genhtml_function_coverage=1 00:26:42.289 --rc genhtml_legend=1 00:26:42.289 --rc geninfo_all_blocks=1 00:26:42.289 --rc geninfo_unexecuted_blocks=1 00:26:42.289 00:26:42.289 ' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:26:42.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.289 --rc genhtml_branch_coverage=1 00:26:42.289 --rc genhtml_function_coverage=1 00:26:42.289 --rc genhtml_legend=1 00:26:42.289 --rc geninfo_all_blocks=1 00:26:42.289 --rc geninfo_unexecuted_blocks=1 00:26:42.289 00:26:42.289 ' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:26:42.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.289 --rc genhtml_branch_coverage=1 00:26:42.289 --rc genhtml_function_coverage=1 00:26:42.289 --rc genhtml_legend=1 00:26:42.289 --rc geninfo_all_blocks=1 00:26:42.289 --rc geninfo_unexecuted_blocks=1 00:26:42.289 00:26:42.289 ' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:26:42.289 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:42.289 --rc genhtml_branch_coverage=1 00:26:42.289 --rc genhtml_function_coverage=1 00:26:42.289 --rc 
genhtml_legend=1 00:26:42.289 --rc geninfo_all_blocks=1 00:26:42.289 --rc geninfo_unexecuted_blocks=1 00:26:42.289 00:26:42.289 ' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' 
-eq 1 ']' 00:26:42.289 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:42.289 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:26:42.290 06:37:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:44.196 06:37:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:26:44.196 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.197 
06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:26:44.197 Found 0000:09:00.0 (0x8086 - 0x159b) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:26:44.197 Found 0000:09:00.1 (0x8086 - 0x159b) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:26:44.197 Found net devices under 0000:09:00.0: cvl_0_0 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:26:44.197 Found net devices under 0000:09:00.1: cvl_0_1 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:44.197 06:37:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:44.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:44.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:26:44.456 00:26:44.456 --- 10.0.0.2 ping statistics --- 00:26:44.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.456 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:44.456 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:44.456 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:26:44.456 00:26:44.456 --- 10.0.0.1 ping statistics --- 00:26:44.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:44.456 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=2172021 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 2172021 
00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2172021 ']' 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:44.456 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:44.456 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.456 [2024-11-20 06:37:16.178257] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:26:44.456 [2024-11-20 06:37:16.178365] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:44.456 [2024-11-20 06:37:16.248807] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:26:44.714 [2024-11-20 06:37:16.303392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:44.714 [2024-11-20 06:37:16.303440] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:44.714 [2024-11-20 06:37:16.303467] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:44.714 [2024-11-20 06:37:16.303478] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:44.714 [2024-11-20 06:37:16.303487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:44.714 [2024-11-20 06:37:16.304905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.714 [2024-11-20 06:37:16.304911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:44.714 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:44.714 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:44.714 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:44.714 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:44.715 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:44.715 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:44.715 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2172021 00:26:44.715 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:44.973 [2024-11-20 06:37:16.751254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:44.973 06:37:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:45.538 Malloc0 00:26:45.538 06:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:26:45.796 06:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:46.054 06:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:46.312 [2024-11-20 06:37:17.911927] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.312 06:37:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:46.570 [2024-11-20 06:37:18.184640] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:46.570 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2172305 00:26:46.570 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:26:46.570 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:26:46.570 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2172305 
/var/tmp/bdevperf.sock 00:26:46.571 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # '[' -z 2172305 ']' 00:26:46.571 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:46.571 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # local max_retries=100 00:26:46.571 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:46.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:46.571 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # xtrace_disable 00:26:46.571 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:26:46.829 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:26:46.829 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@866 -- # return 0 00:26:46.829 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:26:47.087 06:37:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:47.653 Nvme0n1 00:26:47.653 06:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:26:47.911 Nvme0n1 00:26:47.911 06:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:26:47.911 06:37:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:26:50.440 06:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:26:50.440 06:37:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:26:50.440 06:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:50.697 06:37:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:26:51.633 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:26:51.633 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:51.633 06:37:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.633 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:51.891 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:51.891 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:51.891 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:51.891 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:52.149 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:52.149 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:52.149 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.149 06:37:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:52.407 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.407 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:52.407 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.407 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:52.974 06:37:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:52.974 06:37:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:53.232 06:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:53.232 06:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:26:53.232 06:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:53.798 06:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:26:53.798 06:37:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.173 06:37:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:55.431 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.431 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:55.431 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.431 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:55.690 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.690 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:55.690 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.690 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:55.948 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:55.948 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:55.948 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:55.948 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:56.207 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.207 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:56.207 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:56.207 06:37:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:56.465 06:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:56.465 06:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:26:56.465 06:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:26:56.723 06:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:26:56.983 06:37:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:26:58.387 06:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:26:58.387 06:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:26:58.387 06:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.387 06:37:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:26:58.387 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.387 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:26:58.387 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.387 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:26:58.646 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:26:58.646 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:26:58.646 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.646 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:26:58.903 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:58.903 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:26:58.903 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:58.903 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:26:59.161 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.161 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:26:59.161 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.161 06:37:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:26:59.419 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.419 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:26:59.419 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:26:59.419 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:26:59.677 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:26:59.677 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:26:59.677 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
non_optimized 00:27:00.243 06:37:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:00.243 06:37:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.615 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:01.874 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:01.874 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:01.874 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:01.874 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:02.131 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.131 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:02.131 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.131 06:37:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:02.389 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.389 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:02.389 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:27:02.389 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:02.647 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:02.647 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:02.647 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:02.647 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:02.905 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:02.905 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:02.905 06:37:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:03.470 06:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:03.470 06:37:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:04.841 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:04.841 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:04.841 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.841 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:04.842 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:04.842 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:04.842 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:04.842 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:05.099 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.099 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:05.099 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.099 06:37:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:05.356 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.356 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:05.356 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.356 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:05.614 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:05.614 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:05.614 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.614 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:05.872 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:05.872 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:05.872 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:05.872 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:06.130 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:06.130 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:06.130 06:37:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:06.387 06:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:06.645 06:37:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:08.017 06:37:39 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.017 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:08.275 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.275 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:08.275 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.275 06:37:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:08.533 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.533 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:08.533 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.533 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:08.791 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.791 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:08.791 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:08.791 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:09.048 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.048 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:09.048 06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.048 
06:37:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:09.306 06:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.306 06:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:09.563 06:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:09.563 06:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:10.129 06:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:10.129 06:37:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:11.504 06:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:11.504 06:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:11.504 06:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.504 06:37:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:11.504 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.504 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:11.504 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.504 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:11.763 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:11.763 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:11.763 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:11.763 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.021 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.021 06:37:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.021 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.021 06:37:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.280 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.280 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.280 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.280 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:12.538 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.538 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:12.538 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.538 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:12.796 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.796 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:12.796 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:13.054 06:37:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:13.620 06:37:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:27:14.555 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:14.555 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:14.555 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.555 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:14.813 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:14.813 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:14.813 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:14.813 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.072 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.072 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.072 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.072 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.331 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.331 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.331 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.331 06:37:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.590 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.590 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.590 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.590 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.848 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.848 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:15.848 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.848 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.106 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:16.106 06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:16.106 
06:37:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:16.364 06:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:16.623 06:37:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:17.558 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:17.558 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:17.558 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.558 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.816 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:17.816 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:17.816 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.816 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:18.074 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.074 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:18.074 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.074 06:37:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.332 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.332 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.332 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.332 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.898 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:19.156 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:19.156 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:19.156 06:37:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:19.723 06:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:19.723 06:37:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.096 06:37:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:21.377 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == 
\f\a\l\s\e ]] 00:27:21.377 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:21.377 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.377 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:21.635 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.635 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:21.635 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.635 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.893 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.893 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:21.893 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.894 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:22.459 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:22.459 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:22.459 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:22.459 06:37:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2172305 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2172305 ']' 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2172305 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:22.459 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2172305 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # 
process_name=reactor_2 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_2 = sudo ']' 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2172305' 00:27:22.724 killing process with pid 2172305 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2172305 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2172305 00:27:22.724 { 00:27:22.724 "results": [ 00:27:22.724 { 00:27:22.724 "job": "Nvme0n1", 00:27:22.724 "core_mask": "0x4", 00:27:22.724 "workload": "verify", 00:27:22.724 "status": "terminated", 00:27:22.724 "verify_range": { 00:27:22.724 "start": 0, 00:27:22.724 "length": 16384 00:27:22.724 }, 00:27:22.724 "queue_depth": 128, 00:27:22.724 "io_size": 4096, 00:27:22.724 "runtime": 34.393696, 00:27:22.724 "iops": 8000.710362736241, 00:27:22.724 "mibps": 31.252774854438442, 00:27:22.724 "io_failed": 0, 00:27:22.724 "io_timeout": 0, 00:27:22.724 "avg_latency_us": 15970.908858798837, 00:27:22.724 "min_latency_us": 628.0533333333333, 00:27:22.724 "max_latency_us": 4026531.84 00:27:22.724 } 00:27:22.724 ], 00:27:22.724 "core_count": 1 00:27:22.724 } 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2172305 00:27:22.724 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:22.724 [2024-11-20 06:37:18.248461] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:27:22.724 [2024-11-20 06:37:18.248549] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2172305 ] 00:27:22.724 [2024-11-20 06:37:18.317637] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.724 [2024-11-20 06:37:18.380011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:22.724 Running I/O for 90 seconds... 
00:27:22.724 8469.00 IOPS, 33.08 MiB/s [2024-11-20T05:37:54.560Z] 8521.00 IOPS, 33.29 MiB/s [2024-11-20T05:37:54.560Z] 8588.67 IOPS, 33.55 MiB/s [2024-11-20T05:37:54.560Z] 8604.50 IOPS, 33.61 MiB/s [2024-11-20T05:37:54.560Z] 8562.40 IOPS, 33.45 MiB/s [2024-11-20T05:37:54.560Z] 8574.00 IOPS, 33.49 MiB/s [2024-11-20T05:37:54.560Z] 8550.00 IOPS, 33.40 MiB/s [2024-11-20T05:37:54.560Z] 8561.50 IOPS, 33.44 MiB/s [2024-11-20T05:37:54.560Z] 8570.67 IOPS, 33.48 MiB/s [2024-11-20T05:37:54.560Z] 8576.20 IOPS, 33.50 MiB/s [2024-11-20T05:37:54.560Z] 8557.82 IOPS, 33.43 MiB/s [2024-11-20T05:37:54.560Z] 8571.58 IOPS, 33.48 MiB/s [2024-11-20T05:37:54.560Z] 8564.69 IOPS, 33.46 MiB/s [2024-11-20T05:37:54.560Z] 8557.86 IOPS, 33.43 MiB/s [2024-11-20T05:37:54.560Z] 8560.00 IOPS, 33.44 MiB/s [2024-11-20T05:37:54.560Z] [2024-11-20 06:37:34.984525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.724 [2024-11-20 06:37:34.984592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:22.724 [2024-11-20 06:37:34.984656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.724 [2024-11-20 06:37:34.984678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:22.724 [2024-11-20 06:37:34.984703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.724 [2024-11-20 06:37:34.984720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:22.724 [2024-11-20 06:37:34.984743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.724 [2024-11-20 06:37:34.984760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:22.724 [2024-11-20 06:37:34.984782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.724 [2024-11-20 06:37:34.984798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:22.724 [2024-11-20 06:37:34.984820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.724 [2024-11-20 06:37:34.984837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.984859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.984876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.984898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.984915] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.984937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.984954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:13 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.985976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.985999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.986015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.986036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.986053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.986074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.986090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.986111] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.986127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.986164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.986181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:22.725 [2024-11-20 06:37:34.986202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.725 [2024-11-20 06:37:34.986234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 
sqhd:0034 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.986963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.986990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.726 [2024-11-20 06:37:34.987008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:116504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.726 [2024-11-20 06:37:34.987329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 
06:37:34.987631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.726 [2024-11-20 06:37:34.987715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:22.726 [2024-11-20 06:37:34.987740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.987757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.987788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.987806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.987831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.987848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.987873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.987890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.987915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.987932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.987957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.987974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.987999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:117136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:117144 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:117152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:117160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:117168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:117176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:117184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:117192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:117208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:117216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:117224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:117232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:117240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:117248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:117264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:117272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:117280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:117288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 
00:27:22.727 [2024-11-20 06:37:34.988902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:117312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.988960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.988985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:117320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.989001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.989026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:117328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.989043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.989068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:117336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.989084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.989110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:117344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.989134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.989161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:117352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.989178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.989203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:117360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.727 [2024-11-20 06:37:34.989220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:22.727 [2024-11-20 06:37:34.989244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:117368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:117376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:117384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:117392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:117408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:117416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:117424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:117440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:117448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:117456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989733] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.989773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:116512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.728 [2024-11-20 06:37:34.989815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:116520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.728 [2024-11-20 06:37:34.989857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:116528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.728 [2024-11-20 06:37:34.989904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.989930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:116536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.728 [2024-11-20 06:37:34.989946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:116544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.728 [2024-11-20 06:37:34.990132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:117472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.990183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:117480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.990229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.990275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117496 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:22.728 [2024-11-20 06:37:34.990330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:117504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.990377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:34.990407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:117512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:34.990424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:22.728 8078.12 IOPS, 31.56 MiB/s [2024-11-20T05:37:54.564Z] 7602.94 IOPS, 29.70 MiB/s [2024-11-20T05:37:54.564Z] 7180.56 IOPS, 28.05 MiB/s [2024-11-20T05:37:54.564Z] 6802.63 IOPS, 26.57 MiB/s [2024-11-20T05:37:54.564Z] 6840.35 IOPS, 26.72 MiB/s [2024-11-20T05:37:54.564Z] 6918.86 IOPS, 27.03 MiB/s [2024-11-20T05:37:54.564Z] 7011.73 IOPS, 27.39 MiB/s [2024-11-20T05:37:54.564Z] 7177.57 IOPS, 28.04 MiB/s [2024-11-20T05:37:54.564Z] 7334.46 IOPS, 28.65 MiB/s [2024-11-20T05:37:54.564Z] 7490.68 IOPS, 29.26 MiB/s [2024-11-20T05:37:54.564Z] 7526.77 IOPS, 29.40 MiB/s [2024-11-20T05:37:54.564Z] 7566.67 IOPS, 29.56 MiB/s [2024-11-20T05:37:54.564Z] 7599.50 IOPS, 29.69 MiB/s [2024-11-20T05:37:54.564Z] 7676.07 IOPS, 29.98 MiB/s [2024-11-20T05:37:54.564Z] 7796.23 IOPS, 30.45 MiB/s [2024-11-20T05:37:54.564Z] 7896.90 IOPS, 30.85 MiB/s [2024-11-20T05:37:54.564Z] [2024-11-20 06:37:51.538050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:51.538109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:51.539611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:44936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:51.539647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:51.539676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:44952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:51.539695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:51.539719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.728 [2024-11-20 06:37:51.539735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:22.728 [2024-11-20 06:37:51.539758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:44984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.539774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0043 p:0 
m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.539795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.539812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.539833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:45016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.539850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.539871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:45032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.539887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.539908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:45048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.539924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.539946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:45064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.539962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.539984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:45080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:45096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:45112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:45128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:45144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:45160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:45176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.729 [2024-11-20 06:37:51.540265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:44360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:44392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:44424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:44456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:44488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:44520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540827] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:44552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:44584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:44616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.540981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:44272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.540996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.541017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:44304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.541032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:22.729 [2024-11-20 06:37:51.541052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:44336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.729 [2024-11-20 06:37:51.541068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:44400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:44464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:22.730 [2024-11-20 06:37:51.541213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:44496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.730 [2024-11-20 06:37:51.541517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:44688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:44720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 
lba:44752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:44784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:44816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:44912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:45208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.730 [2024-11-20 06:37:51.541897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.730 [2024-11-20 06:37:51.541936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:44648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.541973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.541995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:22.730 [2024-11-20 06:37:51.542011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:22.730 [2024-11-20 06:37:51.542033] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:22.730 [2024-11-20 06:37:51.542049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:22.730 7959.22 IOPS, 31.09 MiB/s [2024-11-20T05:37:54.566Z] 7980.79 IOPS, 31.17 MiB/s [2024-11-20T05:37:54.566Z] 7999.94 IOPS, 31.25 MiB/s [2024-11-20T05:37:54.566Z] Received shutdown signal, test time was about 34.394485 seconds 00:27:22.730 00:27:22.730 Latency(us) 00:27:22.730 [2024-11-20T05:37:54.566Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:22.730 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:22.730 Verification LBA range: start 0x0 length 0x4000 00:27:22.730 Nvme0n1 : 34.39 8000.71 31.25 0.00 0.00 15970.91 628.05 4026531.84 00:27:22.730 [2024-11-20T05:37:54.566Z] =================================================================================================================== 00:27:22.730 [2024-11-20T05:37:54.566Z] Total : 8000.71 31.25 0.00 0.00 15970.91 628.05 4026531.84 00:27:22.730 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:23.077 rmmod nvme_tcp 00:27:23.077 rmmod nvme_fabrics 00:27:23.077 rmmod nvme_keyring 00:27:23.077 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 2172021 ']' 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 2172021 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # '[' -z 2172021 ']' 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # kill -0 2172021 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # uname 00:27:23.336 06:37:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2172021 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2172021' 00:27:23.336 killing process with pid 2172021 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@971 -- # kill 2172021 00:27:23.336 06:37:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@976 -- # wait 2172021 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:23.594 06:37:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:25.496 00:27:25.496 real 0m43.689s 00:27:25.496 user 2m11.530s 00:27:25.496 sys 0m11.439s 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:25.496 ************************************ 00:27:25.496 END TEST nvmf_host_multipath_status 00:27:25.496 ************************************ 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:25.496 ************************************ 00:27:25.496 START TEST nvmf_discovery_remove_ifc 
00:27:25.496 ************************************ 00:27:25.496 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:25.756 * Looking for test storage... 00:27:25.756 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lcov --version 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:27:25.756 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.757 --rc genhtml_branch_coverage=1 00:27:25.757 --rc genhtml_function_coverage=1 00:27:25.757 --rc genhtml_legend=1 00:27:25.757 --rc geninfo_all_blocks=1 00:27:25.757 --rc geninfo_unexecuted_blocks=1 00:27:25.757 00:27:25.757 ' 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.757 --rc genhtml_branch_coverage=1 00:27:25.757 --rc genhtml_function_coverage=1 00:27:25.757 --rc genhtml_legend=1 00:27:25.757 --rc geninfo_all_blocks=1 00:27:25.757 --rc geninfo_unexecuted_blocks=1 00:27:25.757 00:27:25.757 ' 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.757 --rc genhtml_branch_coverage=1 00:27:25.757 --rc genhtml_function_coverage=1 00:27:25.757 --rc genhtml_legend=1 00:27:25.757 --rc geninfo_all_blocks=1 00:27:25.757 --rc geninfo_unexecuted_blocks=1 00:27:25.757 00:27:25.757 ' 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:25.757 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:25.757 --rc genhtml_branch_coverage=1 00:27:25.757 --rc genhtml_function_coverage=1 00:27:25.757 --rc genhtml_legend=1 00:27:25.757 --rc geninfo_all_blocks=1 00:27:25.757 --rc geninfo_unexecuted_blocks=1 00:27:25.757 00:27:25.757 ' 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:25.757 
06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:25.757 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:25.758 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:27:25.758 06:37:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:27:28.296 06:37:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:28.296 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.296 06:37:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:28.296 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:28.296 Found net devices under 0000:09:00.0: cvl_0_0 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:28.296 Found net devices under 0000:09:00.1: cvl_0_1 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:28.296 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:28.297 
06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:28.297 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:28.297 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:27:28.297 00:27:28.297 --- 10.0.0.2 ping statistics --- 00:27:28.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.297 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:28.297 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:28.297 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:27:28.297 00:27:28.297 --- 10.0.0.1 ping statistics --- 00:27:28.297 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:28.297 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=2178782 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@510 -- # waitforlisten 2178782 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2178782 ']' 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:27:28.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:28.297 06:37:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.297 [2024-11-20 06:37:59.803991] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:27:28.297 [2024-11-20 06:37:59.804069] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:28.297 [2024-11-20 06:37:59.872958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.297 [2024-11-20 06:37:59.926390] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:28.297 [2024-11-20 06:37:59.926446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:28.297 [2024-11-20 06:37:59.926473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:28.297 [2024-11-20 06:37:59.926484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:28.297 [2024-11-20 06:37:59.926494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:28.297 [2024-11-20 06:37:59.927055] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.297 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.297 [2024-11-20 06:38:00.128857] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.556 [2024-11-20 06:38:00.137032] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:28.556 null0 00:27:28.556 [2024-11-20 06:38:00.168950] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2178801 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 
--wait-for-rpc -L bdev_nvme 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2178801 /tmp/host.sock 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # '[' -z 2178801 ']' 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@837 -- # local rpc_addr=/tmp/host.sock 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:28.557 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:28.557 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.557 [2024-11-20 06:38:00.237640] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:27:28.557 [2024-11-20 06:38:00.237733] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2178801 ] 00:27:28.557 [2024-11-20 06:38:00.306792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.557 [2024-11-20 06:38:00.367849] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.814 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:28.814 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@866 -- # return 0 00:27:28.814 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.815 06:38:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.187 [2024-11-20 06:38:01.636107] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:30.187 [2024-11-20 06:38:01.636130] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:30.187 [2024-11-20 06:38:01.636155] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:30.187 [2024-11-20 06:38:01.723489] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:30.187 [2024-11-20 06:38:01.947760] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:30.187 [2024-11-20 06:38:01.948756] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x206bc00:1 started. 00:27:30.187 [2024-11-20 06:38:01.950400] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:30.187 [2024-11-20 06:38:01.950452] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:30.187 [2024-11-20 06:38:01.950491] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:30.187 [2024-11-20 06:38:01.950514] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:30.187 [2024-11-20 06:38:01.950537] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.187 [2024-11-20 06:38:01.955458] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x206bc00 was disconnected and freed. delete nvme_qpair. 
00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:30.187 06:38:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:27:30.187 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:30.445 06:38:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:31.380 06:38:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:32.314 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:32.314 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:32.314 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:32.314 06:38:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:32.314 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:32.571 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:32.571 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:32.571 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:32.571 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:32.571 06:38:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:33.504 06:38:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:34.439 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:34.697 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:34.697 06:38:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:35.630 06:38:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:35.630 [2024-11-20 06:38:07.392223] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:35.630 [2024-11-20 06:38:07.392295] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.630 [2024-11-20 06:38:07.392323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.630 [2024-11-20 06:38:07.392355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.630 [2024-11-20 06:38:07.392368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.630 [2024-11-20 06:38:07.392382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.630 [2024-11-20 06:38:07.392394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.630 [2024-11-20 06:38:07.392408] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.630 [2024-11-20 06:38:07.392421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.630 [2024-11-20 06:38:07.392434] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:35.630 [2024-11-20 06:38:07.392446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:35.630 [2024-11-20 06:38:07.392459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2048400 is same with the state(6) to be set 00:27:35.630 [2024-11-20 06:38:07.402243] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2048400 (9): Bad file descriptor 00:27:35.630 [2024-11-20 06:38:07.412308] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:35.630 [2024-11-20 06:38:07.412331] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:35.630 [2024-11-20 06:38:07.412351] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:35.630 [2024-11-20 06:38:07.412359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:35.630 [2024-11-20 06:38:07.412410] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:36.565 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:36.823 [2024-11-20 06:38:08.471346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:36.823 [2024-11-20 06:38:08.471398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2048400 with addr=10.0.0.2, port=4420 00:27:36.823 [2024-11-20 06:38:08.471422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2048400 is same with the state(6) to be set 00:27:36.823 [2024-11-20 06:38:08.471463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2048400 (9): Bad file descriptor 00:27:36.823 [2024-11-20 06:38:08.471931] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:36.824 [2024-11-20 06:38:08.471972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:36.824 [2024-11-20 06:38:08.471989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:36.824 [2024-11-20 06:38:08.472014] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:36.824 [2024-11-20 06:38:08.472027] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:36.824 [2024-11-20 06:38:08.472037] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:36.824 [2024-11-20 06:38:08.472045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:36.824 [2024-11-20 06:38:08.472059] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
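While the controller reset above plays out, the test keeps polling the host-side bdev list once a second. The xtrace boils down to two small helpers in host/discovery_remove_ifc.sh: get_bdev_list dumps bdev names over the host RPC socket, and wait_for_bdev re-checks until the list matches what the test expects. A minimal sketch reconstructed from the commands in the trace (rpc_cmd is assumed to wrap scripts/rpc.py; the real helpers likely add a timeout and other guards):

rpc_cmd() {
    # assumption: thin wrapper around the SPDK RPC client, passing the -s socket through
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py "$@"
}

get_bdev_list() {
    # same pipeline as the trace: names only, sorted, joined onto one line
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # '' waits for the list to empty out; nvme1n1 waits for the re-attached namespace
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}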
00:27:36.824 [2024-11-20 06:38:08.472068] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:36.824 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:36.824 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:36.824 06:38:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:37.757 [2024-11-20 06:38:09.474554] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:37.757 [2024-11-20 06:38:09.474595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:37.757 [2024-11-20 06:38:09.474620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:37.757 [2024-11-20 06:38:09.474632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:37.757 [2024-11-20 06:38:09.474658] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:37.757 [2024-11-20 06:38:09.474670] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:37.757 [2024-11-20 06:38:09.474678] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:37.757 [2024-11-20 06:38:09.474685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:37.757 [2024-11-20 06:38:09.474728] bdev_nvme.c:7135:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:37.757 [2024-11-20 06:38:09.474778] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.758 [2024-11-20 06:38:09.474798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.758 [2024-11-20 06:38:09.474815] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.758 [2024-11-20 06:38:09.474828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.758 [2024-11-20 06:38:09.474841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.758 [2024-11-20 06:38:09.474852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.758 [2024-11-20 06:38:09.474865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.758 [2024-11-20 06:38:09.474877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.758 [2024-11-20 06:38:09.474890] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:37.758 [2024-11-20 06:38:09.474902] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:37.758 [2024-11-20 06:38:09.474914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:37.758 [2024-11-20 06:38:09.475078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2037b40 (9): Bad file descriptor 00:27:37.758 [2024-11-20 06:38:09.476096] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:37.758 [2024-11-20 06:38:09.476116] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.758 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.016 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:38.016 06:38:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.951 06:38:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:38.951 06:38:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:39.885 [2024-11-20 06:38:11.532421] bdev_nvme.c:7384:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:39.885 [2024-11-20 06:38:11.532447] bdev_nvme.c:7470:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:39.885 [2024-11-20 06:38:11.532470] bdev_nvme.c:7347:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:39.885 [2024-11-20 06:38:11.618775] bdev_nvme.c:7313:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:39.885 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:39.885 [2024-11-20 06:38:11.673479] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:39.885 [2024-11-20 06:38:11.674238] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x2052a70:1 started. 
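Taken together, this stretch of the log is an interface flap on the target side: the address is removed and the link brought down (the ip netns exec steps near the top of this run), the host-side nvme0n1 bdev drops out once the connection times out, and when the address and link come back the discovery service re-attaches the subsystem as nvme1/nvme1n1, which is what the bdev list below eventually reports. In outline, paraphrasing the commands from the trace and reusing the wait_for_bdev sketch above:

NS=cvl_0_0_ns_spdk IF=cvl_0_0 ADDR=10.0.0.2/24

# take the target interface away from under the connected host
ip netns exec "$NS" ip addr del "$ADDR" dev "$IF"
ip netns exec "$NS" ip link set "$IF" down
wait_for_bdev ''          # nvme0n1 must disappear after the connection times out

# bring it back and let discovery re-create the controller
ip netns exec "$NS" ip addr add "$ADDR" dev "$IF"
ip netns exec "$NS" ip link set "$IF" up
wait_for_bdev nvme1n1     # the re-attached subsystem shows up under a new name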
00:27:39.885 [2024-11-20 06:38:11.675602] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:39.885 [2024-11-20 06:38:11.675659] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:39.886 [2024-11-20 06:38:11.675692] bdev_nvme.c:8180:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:39.886 [2024-11-20 06:38:11.675714] bdev_nvme.c:7203:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:39.886 [2024-11-20 06:38:11.675727] bdev_nvme.c:7162:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:39.886 [2024-11-20 06:38:11.681124] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x2052a70 was disconnected and freed. delete nvme_qpair. 00:27:39.886 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:39.886 06:38:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2178801 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2178801 ']' 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2178801 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2178801 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2178801' 00:27:41.260 killing process with pid 2178801 
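The killprocess helper exercised here (and again a few lines down for the target at pid 2178782) follows the checks visible in the trace: make sure the pid is non-empty and still alive, look up the process name on Linux, compare it against sudo, then kill and wait. A rough reconstruction; the real version in common/autotest_common.sh has more argument handling, and what it does when the process really is a sudo wrapper is not visible here:

killprocess() {
    local pid=$1 process_name=
    [[ -n "$pid" ]] || return 1
    kill -0 "$pid" || return 1                      # still running?
    [[ "$(uname)" == Linux ]] && process_name=$(ps --no-headers -o comm= "$pid")
    # assumption: for an ordinary process (reactor_0 above, not sudo) a plain kill is enough
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" || true                             # reap it so the caller's wait is a no-op
}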
00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2178801 00:27:41.260 06:38:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2178801 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:41.260 rmmod nvme_tcp 00:27:41.260 rmmod nvme_fabrics 00:27:41.260 rmmod nvme_keyring 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 2178782 ']' 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 2178782 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # '[' -z 2178782 ']' 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # kill -0 2178782 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # uname 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:27:41.260 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2178782 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2178782' 00:27:41.519 killing process with pid 2178782 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@971 -- # kill 2178782 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@976 -- # wait 2178782 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:27:41.519 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:27:41.519 06:38:13 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:41.520 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:27:41.520 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:41.520 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:41.520 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:41.520 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:41.520 06:38:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:44.055 00:27:44.055 real 0m18.082s 00:27:44.055 user 0m26.212s 00:27:44.055 sys 0m3.155s 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.055 ************************************ 00:27:44.055 END TEST nvmf_discovery_remove_ifc 00:27:44.055 ************************************ 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:44.055 ************************************ 00:27:44.055 START TEST nvmf_identify_kernel_target 00:27:44.055 ************************************ 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:44.055 * Looking for test storage... 
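The teardown between the two tests (nvmftestfini above) reduces to: stop the SPDK target, unload the nvme-tcp and nvme-fabrics modules, strip only the iptables rules the test tagged with SPDK_NVMF, and drop the per-test namespace and leftover addresses. Roughly, with the caveat that remove_spdk_ns runs with xtrace silenced above, so its body here is an assumption:

nvmftestfini() {
    sync
    modprobe -v -r nvme-tcp || true
    modprobe -v -r nvme-fabrics || true
    killprocess "$nvmfpid"                       # the target app, pid 2178782 in this run
    # keep every firewall rule except the ones the test added (tagged SPDK_NVMF)
    iptables-save | grep -v SPDK_NVMF | iptables-restore
    # assumption: remove_spdk_ns deletes the namespace created for the target
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
    ip -4 addr flush cvl_0_1
}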
00:27:44.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lcov --version 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:44.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.055 --rc genhtml_branch_coverage=1 00:27:44.055 --rc genhtml_function_coverage=1 00:27:44.055 --rc genhtml_legend=1 00:27:44.055 --rc geninfo_all_blocks=1 00:27:44.055 --rc geninfo_unexecuted_blocks=1 00:27:44.055 00:27:44.055 ' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:44.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.055 --rc genhtml_branch_coverage=1 00:27:44.055 --rc genhtml_function_coverage=1 00:27:44.055 --rc genhtml_legend=1 00:27:44.055 --rc geninfo_all_blocks=1 00:27:44.055 --rc geninfo_unexecuted_blocks=1 00:27:44.055 00:27:44.055 ' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:44.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.055 --rc genhtml_branch_coverage=1 00:27:44.055 --rc genhtml_function_coverage=1 00:27:44.055 --rc genhtml_legend=1 00:27:44.055 --rc geninfo_all_blocks=1 00:27:44.055 --rc geninfo_unexecuted_blocks=1 00:27:44.055 00:27:44.055 ' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:44.055 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:44.055 --rc genhtml_branch_coverage=1 00:27:44.055 --rc genhtml_function_coverage=1 00:27:44.055 --rc genhtml_legend=1 00:27:44.055 --rc geninfo_all_blocks=1 00:27:44.055 --rc geninfo_unexecuted_blocks=1 00:27:44.055 00:27:44.055 ' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.055 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 
-- # '[' '' -eq 1 ']' 00:27:44.056 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:27:44.056 06:38:15 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:27:45.967 06:38:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:45.967 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:45.967 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:45.967 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:45.968 Found net devices under 0000:09:00.0: cvl_0_0 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:45.968 Found net devices under 0000:09:00.1: cvl_0_1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:45.968 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:45.968 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:27:45.968 00:27:45.968 --- 10.0.0.2 ping statistics --- 00:27:45.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.968 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:45.968 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:45.968 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.091 ms 00:27:45.968 00:27:45.968 --- 10.0.0.1 ping statistics --- 00:27:45.968 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:45.968 rtt min/avg/max/mdev = 0.091/0.091/0.091/0.000 ms 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:45.968 06:38:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:45.968 06:38:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:47.347 Waiting for block devices as requested 00:27:47.347 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:47.347 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:47.347 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:47.606 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:47.606 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:47.606 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:47.606 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:47.864 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:47.864 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:48.121 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:48.121 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:48.121 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:48.121 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:48.381 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:48.381 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:48.381 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:48.381 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1651 -- # [[ none != none ]] 
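configure_kernel_target, which the trace steps through below one mkdir and echo at a time, is plain nvmet configfs plumbing: create a subsystem with a single namespace backed by the local /dev/nvme0n1, create a TCP port on 10.0.0.1:4420, and link the two. A condensed sketch; the xtrace does not show redirection targets, so the attribute file names come from the standard nvmet configfs layout and should be read as assumptions:

subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1

modprobe nvmet
mkdir -p "$subsys/namespaces/1" "$port"

# the trace also writes "SPDK-nqn.2016-06.io.spdk:testnqn" into a subsystem
# attribute (serial or model; the exact file is not visible in the xtrace)
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"

ln -s "$subsys" "$port/subsystems/"

Once the symlink is in place the kernel target answers on 10.0.0.1:4420, which matches the nvme discover output further down: two records, the discovery subsystem plus nqn.2016-06.io.spdk:testnqn.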
00:27:48.639 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:48.640 No valid GPT data, bailing 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:48.640 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:48.900 00:27:48.900 Discovery Log Number of Records 2, Generation counter 2 00:27:48.900 =====Discovery Log Entry 0====== 00:27:48.900 trtype: tcp 00:27:48.900 adrfam: ipv4 00:27:48.900 subtype: current discovery subsystem 00:27:48.900 treq: not specified, sq flow control disable supported 00:27:48.900 portid: 1 00:27:48.900 trsvcid: 4420 00:27:48.900 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:48.900 traddr: 10.0.0.1 00:27:48.900 eflags: none 00:27:48.900 sectype: none 00:27:48.900 =====Discovery Log Entry 1====== 00:27:48.900 trtype: tcp 00:27:48.900 adrfam: ipv4 00:27:48.900 subtype: nvme subsystem 00:27:48.900 treq: not specified, sq flow control disable 
supported 00:27:48.900 portid: 1 00:27:48.900 trsvcid: 4420 00:27:48.900 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:48.900 traddr: 10.0.0.1 00:27:48.900 eflags: none 00:27:48.900 sectype: none 00:27:48.900 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:48.901 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:48.901 ===================================================== 00:27:48.901 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:48.901 ===================================================== 00:27:48.901 Controller Capabilities/Features 00:27:48.901 ================================ 00:27:48.901 Vendor ID: 0000 00:27:48.901 Subsystem Vendor ID: 0000 00:27:48.901 Serial Number: 2d882d5abb35e3b0ef5e 00:27:48.901 Model Number: Linux 00:27:48.901 Firmware Version: 6.8.9-20 00:27:48.901 Recommended Arb Burst: 0 00:27:48.901 IEEE OUI Identifier: 00 00 00 00:27:48.901 Multi-path I/O 00:27:48.901 May have multiple subsystem ports: No 00:27:48.901 May have multiple controllers: No 00:27:48.901 Associated with SR-IOV VF: No 00:27:48.901 Max Data Transfer Size: Unlimited 00:27:48.901 Max Number of Namespaces: 0 00:27:48.901 Max Number of I/O Queues: 1024 00:27:48.901 NVMe Specification Version (VS): 1.3 00:27:48.901 NVMe Specification Version (Identify): 1.3 00:27:48.901 Maximum Queue Entries: 1024 00:27:48.901 Contiguous Queues Required: No 00:27:48.901 Arbitration Mechanisms Supported 00:27:48.901 Weighted Round Robin: Not Supported 00:27:48.901 Vendor Specific: Not Supported 00:27:48.901 Reset Timeout: 7500 ms 00:27:48.901 Doorbell Stride: 4 bytes 00:27:48.901 NVM Subsystem Reset: Not Supported 00:27:48.901 Command Sets Supported 00:27:48.901 NVM Command Set: Supported 00:27:48.901 Boot Partition: Not Supported 00:27:48.901 Memory Page Size Minimum: 4096 bytes 00:27:48.901 Memory Page Size Maximum: 4096 bytes 00:27:48.901 Persistent Memory Region: Not Supported 00:27:48.901 Optional Asynchronous Events Supported 00:27:48.901 Namespace Attribute Notices: Not Supported 00:27:48.901 Firmware Activation Notices: Not Supported 00:27:48.901 ANA Change Notices: Not Supported 00:27:48.901 PLE Aggregate Log Change Notices: Not Supported 00:27:48.901 LBA Status Info Alert Notices: Not Supported 00:27:48.901 EGE Aggregate Log Change Notices: Not Supported 00:27:48.901 Normal NVM Subsystem Shutdown event: Not Supported 00:27:48.901 Zone Descriptor Change Notices: Not Supported 00:27:48.901 Discovery Log Change Notices: Supported 00:27:48.901 Controller Attributes 00:27:48.901 128-bit Host Identifier: Not Supported 00:27:48.901 Non-Operational Permissive Mode: Not Supported 00:27:48.901 NVM Sets: Not Supported 00:27:48.901 Read Recovery Levels: Not Supported 00:27:48.901 Endurance Groups: Not Supported 00:27:48.901 Predictable Latency Mode: Not Supported 00:27:48.901 Traffic Based Keep ALive: Not Supported 00:27:48.901 Namespace Granularity: Not Supported 00:27:48.901 SQ Associations: Not Supported 00:27:48.901 UUID List: Not Supported 00:27:48.901 Multi-Domain Subsystem: Not Supported 00:27:48.901 Fixed Capacity Management: Not Supported 00:27:48.901 Variable Capacity Management: Not Supported 00:27:48.901 Delete Endurance Group: Not Supported 00:27:48.901 Delete NVM Set: Not Supported 00:27:48.901 Extended LBA Formats Supported: Not Supported 00:27:48.901 Flexible Data Placement 
Supported: Not Supported 00:27:48.901 00:27:48.901 Controller Memory Buffer Support 00:27:48.901 ================================ 00:27:48.901 Supported: No 00:27:48.901 00:27:48.901 Persistent Memory Region Support 00:27:48.901 ================================ 00:27:48.901 Supported: No 00:27:48.901 00:27:48.901 Admin Command Set Attributes 00:27:48.901 ============================ 00:27:48.901 Security Send/Receive: Not Supported 00:27:48.901 Format NVM: Not Supported 00:27:48.901 Firmware Activate/Download: Not Supported 00:27:48.901 Namespace Management: Not Supported 00:27:48.901 Device Self-Test: Not Supported 00:27:48.901 Directives: Not Supported 00:27:48.901 NVMe-MI: Not Supported 00:27:48.901 Virtualization Management: Not Supported 00:27:48.901 Doorbell Buffer Config: Not Supported 00:27:48.901 Get LBA Status Capability: Not Supported 00:27:48.901 Command & Feature Lockdown Capability: Not Supported 00:27:48.901 Abort Command Limit: 1 00:27:48.901 Async Event Request Limit: 1 00:27:48.901 Number of Firmware Slots: N/A 00:27:48.901 Firmware Slot 1 Read-Only: N/A 00:27:48.901 Firmware Activation Without Reset: N/A 00:27:48.901 Multiple Update Detection Support: N/A 00:27:48.901 Firmware Update Granularity: No Information Provided 00:27:48.901 Per-Namespace SMART Log: No 00:27:48.901 Asymmetric Namespace Access Log Page: Not Supported 00:27:48.901 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:48.901 Command Effects Log Page: Not Supported 00:27:48.901 Get Log Page Extended Data: Supported 00:27:48.901 Telemetry Log Pages: Not Supported 00:27:48.901 Persistent Event Log Pages: Not Supported 00:27:48.901 Supported Log Pages Log Page: May Support 00:27:48.901 Commands Supported & Effects Log Page: Not Supported 00:27:48.901 Feature Identifiers & Effects Log Page:May Support 00:27:48.901 NVMe-MI Commands & Effects Log Page: May Support 00:27:48.901 Data Area 4 for Telemetry Log: Not Supported 00:27:48.901 Error Log Page Entries Supported: 1 00:27:48.901 Keep Alive: Not Supported 00:27:48.901 00:27:48.901 NVM Command Set Attributes 00:27:48.901 ========================== 00:27:48.901 Submission Queue Entry Size 00:27:48.901 Max: 1 00:27:48.901 Min: 1 00:27:48.901 Completion Queue Entry Size 00:27:48.901 Max: 1 00:27:48.901 Min: 1 00:27:48.901 Number of Namespaces: 0 00:27:48.901 Compare Command: Not Supported 00:27:48.901 Write Uncorrectable Command: Not Supported 00:27:48.901 Dataset Management Command: Not Supported 00:27:48.901 Write Zeroes Command: Not Supported 00:27:48.901 Set Features Save Field: Not Supported 00:27:48.901 Reservations: Not Supported 00:27:48.901 Timestamp: Not Supported 00:27:48.901 Copy: Not Supported 00:27:48.901 Volatile Write Cache: Not Present 00:27:48.901 Atomic Write Unit (Normal): 1 00:27:48.901 Atomic Write Unit (PFail): 1 00:27:48.901 Atomic Compare & Write Unit: 1 00:27:48.901 Fused Compare & Write: Not Supported 00:27:48.901 Scatter-Gather List 00:27:48.901 SGL Command Set: Supported 00:27:48.901 SGL Keyed: Not Supported 00:27:48.901 SGL Bit Bucket Descriptor: Not Supported 00:27:48.901 SGL Metadata Pointer: Not Supported 00:27:48.901 Oversized SGL: Not Supported 00:27:48.901 SGL Metadata Address: Not Supported 00:27:48.901 SGL Offset: Supported 00:27:48.901 Transport SGL Data Block: Not Supported 00:27:48.901 Replay Protected Memory Block: Not Supported 00:27:48.901 00:27:48.901 Firmware Slot Information 00:27:48.901 ========================= 00:27:48.901 Active slot: 0 00:27:48.901 00:27:48.901 00:27:48.901 Error Log 00:27:48.901 
========= 00:27:48.901 00:27:48.901 Active Namespaces 00:27:48.901 ================= 00:27:48.901 Discovery Log Page 00:27:48.901 ================== 00:27:48.901 Generation Counter: 2 00:27:48.901 Number of Records: 2 00:27:48.901 Record Format: 0 00:27:48.901 00:27:48.901 Discovery Log Entry 0 00:27:48.901 ---------------------- 00:27:48.901 Transport Type: 3 (TCP) 00:27:48.901 Address Family: 1 (IPv4) 00:27:48.901 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:48.901 Entry Flags: 00:27:48.902 Duplicate Returned Information: 0 00:27:48.902 Explicit Persistent Connection Support for Discovery: 0 00:27:48.902 Transport Requirements: 00:27:48.902 Secure Channel: Not Specified 00:27:48.902 Port ID: 1 (0x0001) 00:27:48.902 Controller ID: 65535 (0xffff) 00:27:48.902 Admin Max SQ Size: 32 00:27:48.902 Transport Service Identifier: 4420 00:27:48.902 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:48.902 Transport Address: 10.0.0.1 00:27:48.902 Discovery Log Entry 1 00:27:48.902 ---------------------- 00:27:48.902 Transport Type: 3 (TCP) 00:27:48.902 Address Family: 1 (IPv4) 00:27:48.902 Subsystem Type: 2 (NVM Subsystem) 00:27:48.902 Entry Flags: 00:27:48.902 Duplicate Returned Information: 0 00:27:48.902 Explicit Persistent Connection Support for Discovery: 0 00:27:48.902 Transport Requirements: 00:27:48.902 Secure Channel: Not Specified 00:27:48.902 Port ID: 1 (0x0001) 00:27:48.902 Controller ID: 65535 (0xffff) 00:27:48.902 Admin Max SQ Size: 32 00:27:48.902 Transport Service Identifier: 4420 00:27:48.902 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:48.902 Transport Address: 10.0.0.1 00:27:48.902 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:48.902 get_feature(0x01) failed 00:27:48.902 get_feature(0x02) failed 00:27:48.902 get_feature(0x04) failed 00:27:48.902 ===================================================== 00:27:48.902 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:48.902 ===================================================== 00:27:48.902 Controller Capabilities/Features 00:27:48.902 ================================ 00:27:48.902 Vendor ID: 0000 00:27:48.902 Subsystem Vendor ID: 0000 00:27:48.902 Serial Number: 407147e448a49bf336da 00:27:48.902 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:48.902 Firmware Version: 6.8.9-20 00:27:48.902 Recommended Arb Burst: 6 00:27:48.902 IEEE OUI Identifier: 00 00 00 00:27:48.902 Multi-path I/O 00:27:48.902 May have multiple subsystem ports: Yes 00:27:48.902 May have multiple controllers: Yes 00:27:48.902 Associated with SR-IOV VF: No 00:27:48.902 Max Data Transfer Size: Unlimited 00:27:48.902 Max Number of Namespaces: 1024 00:27:48.902 Max Number of I/O Queues: 128 00:27:48.902 NVMe Specification Version (VS): 1.3 00:27:48.902 NVMe Specification Version (Identify): 1.3 00:27:48.902 Maximum Queue Entries: 1024 00:27:48.902 Contiguous Queues Required: No 00:27:48.902 Arbitration Mechanisms Supported 00:27:48.902 Weighted Round Robin: Not Supported 00:27:48.902 Vendor Specific: Not Supported 00:27:48.902 Reset Timeout: 7500 ms 00:27:48.902 Doorbell Stride: 4 bytes 00:27:48.902 NVM Subsystem Reset: Not Supported 00:27:48.902 Command Sets Supported 00:27:48.902 NVM Command Set: Supported 00:27:48.902 Boot Partition: Not Supported 00:27:48.902 
Memory Page Size Minimum: 4096 bytes 00:27:48.902 Memory Page Size Maximum: 4096 bytes 00:27:48.902 Persistent Memory Region: Not Supported 00:27:48.902 Optional Asynchronous Events Supported 00:27:48.902 Namespace Attribute Notices: Supported 00:27:48.902 Firmware Activation Notices: Not Supported 00:27:48.902 ANA Change Notices: Supported 00:27:48.902 PLE Aggregate Log Change Notices: Not Supported 00:27:48.902 LBA Status Info Alert Notices: Not Supported 00:27:48.902 EGE Aggregate Log Change Notices: Not Supported 00:27:48.902 Normal NVM Subsystem Shutdown event: Not Supported 00:27:48.902 Zone Descriptor Change Notices: Not Supported 00:27:48.902 Discovery Log Change Notices: Not Supported 00:27:48.902 Controller Attributes 00:27:48.902 128-bit Host Identifier: Supported 00:27:48.902 Non-Operational Permissive Mode: Not Supported 00:27:48.902 NVM Sets: Not Supported 00:27:48.902 Read Recovery Levels: Not Supported 00:27:48.902 Endurance Groups: Not Supported 00:27:48.902 Predictable Latency Mode: Not Supported 00:27:48.902 Traffic Based Keep ALive: Supported 00:27:48.902 Namespace Granularity: Not Supported 00:27:48.902 SQ Associations: Not Supported 00:27:48.902 UUID List: Not Supported 00:27:48.902 Multi-Domain Subsystem: Not Supported 00:27:48.902 Fixed Capacity Management: Not Supported 00:27:48.902 Variable Capacity Management: Not Supported 00:27:48.902 Delete Endurance Group: Not Supported 00:27:48.902 Delete NVM Set: Not Supported 00:27:48.902 Extended LBA Formats Supported: Not Supported 00:27:48.902 Flexible Data Placement Supported: Not Supported 00:27:48.902 00:27:48.902 Controller Memory Buffer Support 00:27:48.902 ================================ 00:27:48.902 Supported: No 00:27:48.902 00:27:48.902 Persistent Memory Region Support 00:27:48.902 ================================ 00:27:48.902 Supported: No 00:27:48.902 00:27:48.902 Admin Command Set Attributes 00:27:48.902 ============================ 00:27:48.902 Security Send/Receive: Not Supported 00:27:48.902 Format NVM: Not Supported 00:27:48.902 Firmware Activate/Download: Not Supported 00:27:48.902 Namespace Management: Not Supported 00:27:48.902 Device Self-Test: Not Supported 00:27:48.902 Directives: Not Supported 00:27:48.902 NVMe-MI: Not Supported 00:27:48.902 Virtualization Management: Not Supported 00:27:48.902 Doorbell Buffer Config: Not Supported 00:27:48.902 Get LBA Status Capability: Not Supported 00:27:48.902 Command & Feature Lockdown Capability: Not Supported 00:27:48.902 Abort Command Limit: 4 00:27:48.902 Async Event Request Limit: 4 00:27:48.902 Number of Firmware Slots: N/A 00:27:48.902 Firmware Slot 1 Read-Only: N/A 00:27:48.902 Firmware Activation Without Reset: N/A 00:27:48.902 Multiple Update Detection Support: N/A 00:27:48.902 Firmware Update Granularity: No Information Provided 00:27:48.902 Per-Namespace SMART Log: Yes 00:27:48.902 Asymmetric Namespace Access Log Page: Supported 00:27:48.902 ANA Transition Time : 10 sec 00:27:48.902 00:27:48.902 Asymmetric Namespace Access Capabilities 00:27:48.902 ANA Optimized State : Supported 00:27:48.902 ANA Non-Optimized State : Supported 00:27:48.902 ANA Inaccessible State : Supported 00:27:48.902 ANA Persistent Loss State : Supported 00:27:48.902 ANA Change State : Supported 00:27:48.902 ANAGRPID is not changed : No 00:27:48.902 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:27:48.902 00:27:48.902 ANA Group Identifier Maximum : 128 00:27:48.902 Number of ANA Group Identifiers : 128 00:27:48.902 Max Number of Allowed Namespaces : 1024 00:27:48.902 
Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:48.902 Command Effects Log Page: Supported 00:27:48.903 Get Log Page Extended Data: Supported 00:27:48.903 Telemetry Log Pages: Not Supported 00:27:48.903 Persistent Event Log Pages: Not Supported 00:27:48.903 Supported Log Pages Log Page: May Support 00:27:48.903 Commands Supported & Effects Log Page: Not Supported 00:27:48.903 Feature Identifiers & Effects Log Page:May Support 00:27:48.903 NVMe-MI Commands & Effects Log Page: May Support 00:27:48.903 Data Area 4 for Telemetry Log: Not Supported 00:27:48.903 Error Log Page Entries Supported: 128 00:27:48.903 Keep Alive: Supported 00:27:48.903 Keep Alive Granularity: 1000 ms 00:27:48.903 00:27:48.903 NVM Command Set Attributes 00:27:48.903 ========================== 00:27:48.903 Submission Queue Entry Size 00:27:48.903 Max: 64 00:27:48.903 Min: 64 00:27:48.903 Completion Queue Entry Size 00:27:48.903 Max: 16 00:27:48.903 Min: 16 00:27:48.903 Number of Namespaces: 1024 00:27:48.903 Compare Command: Not Supported 00:27:48.903 Write Uncorrectable Command: Not Supported 00:27:48.903 Dataset Management Command: Supported 00:27:48.903 Write Zeroes Command: Supported 00:27:48.903 Set Features Save Field: Not Supported 00:27:48.903 Reservations: Not Supported 00:27:48.903 Timestamp: Not Supported 00:27:48.903 Copy: Not Supported 00:27:48.903 Volatile Write Cache: Present 00:27:48.903 Atomic Write Unit (Normal): 1 00:27:48.903 Atomic Write Unit (PFail): 1 00:27:48.903 Atomic Compare & Write Unit: 1 00:27:48.903 Fused Compare & Write: Not Supported 00:27:48.903 Scatter-Gather List 00:27:48.903 SGL Command Set: Supported 00:27:48.903 SGL Keyed: Not Supported 00:27:48.903 SGL Bit Bucket Descriptor: Not Supported 00:27:48.903 SGL Metadata Pointer: Not Supported 00:27:48.903 Oversized SGL: Not Supported 00:27:48.903 SGL Metadata Address: Not Supported 00:27:48.903 SGL Offset: Supported 00:27:48.903 Transport SGL Data Block: Not Supported 00:27:48.903 Replay Protected Memory Block: Not Supported 00:27:48.903 00:27:48.903 Firmware Slot Information 00:27:48.903 ========================= 00:27:48.903 Active slot: 0 00:27:48.903 00:27:48.903 Asymmetric Namespace Access 00:27:48.903 =========================== 00:27:48.903 Change Count : 0 00:27:48.903 Number of ANA Group Descriptors : 1 00:27:48.903 ANA Group Descriptor : 0 00:27:48.903 ANA Group ID : 1 00:27:48.903 Number of NSID Values : 1 00:27:48.903 Change Count : 0 00:27:48.903 ANA State : 1 00:27:48.903 Namespace Identifier : 1 00:27:48.903 00:27:48.903 Commands Supported and Effects 00:27:48.903 ============================== 00:27:48.903 Admin Commands 00:27:48.903 -------------- 00:27:48.903 Get Log Page (02h): Supported 00:27:48.903 Identify (06h): Supported 00:27:48.903 Abort (08h): Supported 00:27:48.903 Set Features (09h): Supported 00:27:48.903 Get Features (0Ah): Supported 00:27:48.903 Asynchronous Event Request (0Ch): Supported 00:27:48.903 Keep Alive (18h): Supported 00:27:48.903 I/O Commands 00:27:48.903 ------------ 00:27:48.903 Flush (00h): Supported 00:27:48.903 Write (01h): Supported LBA-Change 00:27:48.903 Read (02h): Supported 00:27:48.903 Write Zeroes (08h): Supported LBA-Change 00:27:48.903 Dataset Management (09h): Supported 00:27:48.903 00:27:48.903 Error Log 00:27:48.903 ========= 00:27:48.903 Entry: 0 00:27:48.903 Error Count: 0x3 00:27:48.903 Submission Queue Id: 0x0 00:27:48.903 Command Id: 0x5 00:27:48.903 Phase Bit: 0 00:27:48.903 Status Code: 0x2 00:27:48.903 Status Code Type: 0x0 00:27:48.903 Do Not Retry: 1 00:27:48.903 
Error Location: 0x28 00:27:48.903 LBA: 0x0 00:27:48.903 Namespace: 0x0 00:27:48.903 Vendor Log Page: 0x0 00:27:48.903 ----------- 00:27:48.903 Entry: 1 00:27:48.903 Error Count: 0x2 00:27:48.903 Submission Queue Id: 0x0 00:27:48.903 Command Id: 0x5 00:27:48.903 Phase Bit: 0 00:27:48.903 Status Code: 0x2 00:27:48.903 Status Code Type: 0x0 00:27:48.903 Do Not Retry: 1 00:27:48.903 Error Location: 0x28 00:27:48.903 LBA: 0x0 00:27:48.903 Namespace: 0x0 00:27:48.903 Vendor Log Page: 0x0 00:27:48.903 ----------- 00:27:48.903 Entry: 2 00:27:48.903 Error Count: 0x1 00:27:48.903 Submission Queue Id: 0x0 00:27:48.903 Command Id: 0x4 00:27:48.903 Phase Bit: 0 00:27:48.903 Status Code: 0x2 00:27:48.903 Status Code Type: 0x0 00:27:48.903 Do Not Retry: 1 00:27:48.903 Error Location: 0x28 00:27:48.903 LBA: 0x0 00:27:48.903 Namespace: 0x0 00:27:48.903 Vendor Log Page: 0x0 00:27:48.903 00:27:48.903 Number of Queues 00:27:48.903 ================ 00:27:48.903 Number of I/O Submission Queues: 128 00:27:48.903 Number of I/O Completion Queues: 128 00:27:48.903 00:27:48.903 ZNS Specific Controller Data 00:27:48.903 ============================ 00:27:48.903 Zone Append Size Limit: 0 00:27:48.903 00:27:48.903 00:27:48.903 Active Namespaces 00:27:48.903 ================= 00:27:48.903 get_feature(0x05) failed 00:27:48.903 Namespace ID:1 00:27:48.903 Command Set Identifier: NVM (00h) 00:27:48.903 Deallocate: Supported 00:27:48.903 Deallocated/Unwritten Error: Not Supported 00:27:48.903 Deallocated Read Value: Unknown 00:27:48.903 Deallocate in Write Zeroes: Not Supported 00:27:48.903 Deallocated Guard Field: 0xFFFF 00:27:48.903 Flush: Supported 00:27:48.903 Reservation: Not Supported 00:27:48.903 Namespace Sharing Capabilities: Multiple Controllers 00:27:48.903 Size (in LBAs): 1953525168 (931GiB) 00:27:48.903 Capacity (in LBAs): 1953525168 (931GiB) 00:27:48.903 Utilization (in LBAs): 1953525168 (931GiB) 00:27:48.903 UUID: 4e424416-13e4-4508-af99-38201d72cf89 00:27:48.903 Thin Provisioning: Not Supported 00:27:48.903 Per-NS Atomic Units: Yes 00:27:48.903 Atomic Boundary Size (Normal): 0 00:27:48.903 Atomic Boundary Size (PFail): 0 00:27:48.903 Atomic Boundary Offset: 0 00:27:48.903 NGUID/EUI64 Never Reused: No 00:27:48.903 ANA group ID: 1 00:27:48.903 Namespace Write Protected: No 00:27:48.903 Number of LBA Formats: 1 00:27:48.903 Current LBA Format: LBA Format #00 00:27:48.903 LBA Format #00: Data Size: 512 Metadata Size: 0 00:27:48.903 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:48.903 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:48.903 rmmod nvme_tcp 00:27:49.162 rmmod nvme_fabrics 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:27:49.162 06:38:20 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:49.162 06:38:20 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:27:51.064 06:38:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:52.477 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.2 
(8086 0e22): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:52.477 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:52.477 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:53.414 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:27:53.414 00:27:53.414 real 0m9.728s 00:27:53.414 user 0m2.056s 00:27:53.414 sys 0m3.617s 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:27:53.414 ************************************ 00:27:53.414 END TEST nvmf_identify_kernel_target 00:27:53.414 ************************************ 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:53.414 ************************************ 00:27:53.414 START TEST nvmf_auth_host 00:27:53.414 ************************************ 00:27:53.414 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:53.674 * Looking for test storage... 
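
As an aside to the identify test that just finished above: while the kernel target is still configured, the same discovery path can be cross-checked by hand with nvme-cli. This is not part of the harness; the discover flags mirror the nvme discover call seen earlier in the trace, and the connect/disconnect lines use standard nvme-cli options:

    # Manual cross-check against the kernel target at 10.0.0.1:4420 (not part of the test)
    nvme discover -t tcp -a 10.0.0.1 -s 4420
    nvme connect  -t tcp -a 10.0.0.1 -s 4420 -n nqn.2016-06.io.spdk:testnqn
    nvme list                                  # the exported namespace appears as a local /dev/nvmeXnY
    nvme disconnect -n nqn.2016-06.io.spdk:testnqn

The discovery output should list the same two records reported above: the discovery subsystem itself and nqn.2016-06.io.spdk:testnqn.
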
00:27:53.674 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lcov --version 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:27:53.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.674 --rc genhtml_branch_coverage=1 00:27:53.674 --rc genhtml_function_coverage=1 00:27:53.674 --rc genhtml_legend=1 00:27:53.674 --rc geninfo_all_blocks=1 00:27:53.674 --rc geninfo_unexecuted_blocks=1 00:27:53.674 00:27:53.674 ' 00:27:53.674 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:27:53.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.674 --rc genhtml_branch_coverage=1 00:27:53.674 --rc genhtml_function_coverage=1 00:27:53.675 --rc genhtml_legend=1 00:27:53.675 --rc geninfo_all_blocks=1 00:27:53.675 --rc geninfo_unexecuted_blocks=1 00:27:53.675 00:27:53.675 ' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:27:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.675 --rc genhtml_branch_coverage=1 00:27:53.675 --rc genhtml_function_coverage=1 00:27:53.675 --rc genhtml_legend=1 00:27:53.675 --rc geninfo_all_blocks=1 00:27:53.675 --rc geninfo_unexecuted_blocks=1 00:27:53.675 00:27:53.675 ' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:27:53.675 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:53.675 --rc genhtml_branch_coverage=1 00:27:53.675 --rc genhtml_function_coverage=1 00:27:53.675 --rc genhtml_legend=1 00:27:53.675 --rc geninfo_all_blocks=1 00:27:53.675 --rc geninfo_unexecuted_blocks=1 00:27:53.675 00:27:53.675 ' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:53.675 06:38:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:53.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # 
subnqn=nqn.2024-02.io.spdk:cnode0 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:27:53.675 06:38:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:27:55.579 06:38:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:27:55.579 Found 0000:09:00.0 (0x8086 - 0x159b) 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.579 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:27:55.580 Found 0000:09:00.1 (0x8086 - 0x159b) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:55.580 
06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:27:55.580 Found net devices under 0000:09:00.0: cvl_0_0 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:27:55.580 Found net devices under 0000:09:00.1: cvl_0_1 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:55.580 06:38:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:27:55.580 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:27:55.838 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:55.838 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:27:55.838 00:27:55.838 --- 10.0.0.2 ping statistics --- 00:27:55.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.838 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:55.838 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:55.838 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.070 ms 00:27:55.838 00:27:55.838 --- 10.0.0.1 ping statistics --- 00:27:55.838 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:55.838 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:27:55.838 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=2186017 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 2186017 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2186017 ']' 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
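
The gen_dhchap_key calls traced below pull random bytes from /dev/urandom with xxd -p and hand them to a small inline python helper that wraps them in the DH-HMAC-CHAP ASCII secret form (the DHHC-1 prefix, the hash index, and the chmod 0600 key files are all visible in the trace). A rough standalone equivalent is sketched here; the base64-of-key-plus-CRC-32 layout is the usual NVMe in-band-auth secret representation as produced by nvme-cli's gen-dhchap-key, stated from general knowledge rather than read out of this log, so treat it as an approximation of what format_dhchap_key emits:

    # Sketch: build a DHHC-1 secret like the /tmp/spdk.key-* files used by the auth test.
    # Hash index: 0 = none, 1 = SHA-256, 2 = SHA-384, 3 = SHA-512.
    hexkey=$(xxd -p -c0 -l 32 /dev/urandom)        # 32 random bytes, hex encoded (64-char key)
    keyfile=$(mktemp -t spdk.key-sha512.XXX)
    python3 -c 'import base64,sys,zlib; k=bytes.fromhex(sys.argv[1]); c=zlib.crc32(k).to_bytes(4,"little"); print("DHHC-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(k+c).decode()))' "$hexkey" 3 > "$keyfile"
    chmod 0600 "$keyfile"
    cat "$keyfile"                                 # prints something like DHHC-1:03:<base64 blob>:
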
00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:55.839 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=c1a5f6f09ebcab825d57ce9405648038 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.YYr 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key c1a5f6f09ebcab825d57ce9405648038 0 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 c1a5f6f09ebcab825d57ce9405648038 0 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=c1a5f6f09ebcab825d57ce9405648038 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.YYr 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.YYr 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.YYr 00:27:56.096 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:27:56.097 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.097 06:38:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.097 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.097 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:56.097 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:56.097 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=55e967f19a3e7c7d618bc713b1c0568137a5d3fabdda8fdd146689371c5cd717 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.QFH 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 55e967f19a3e7c7d618bc713b1c0568137a5d3fabdda8fdd146689371c5cd717 3 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 55e967f19a3e7c7d618bc713b1c0568137a5d3fabdda8fdd146689371c5cd717 3 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=55e967f19a3e7c7d618bc713b1c0568137a5d3fabdda8fdd146689371c5cd717 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.QFH 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.QFH 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.QFH 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ca877212357b33a143bc2680dee363f4933d1806125e3987 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.DQl 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ca877212357b33a143bc2680dee363f4933d1806125e3987 0 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ca877212357b33a143bc2680dee363f4933d1806125e3987 0 
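Once all of the key files exist, host/auth.sh loads them into the SPDK app's keyring (keyring_file_add_key key0..key4 / ckey0..ckey3), mirrors the matching DHHC-1 secrets into the kernel nvmet host entry via configfs (nvmet_auth_set_key), and then walks every digest/dhgroup/key combination, doing one authenticated attach/verify/detach per combination (connect_authenticate). The block below condenses the SPDK host-side round-trip for the key1/ckey1, sha256 + ffdhe2048 case exactly as it shows up later in this trace; rpc.py here is shorthand for rpc_cmd, which is assumed to resolve to scripts/rpc.py talking to the app's /var/tmp/spdk.sock socket.

    # Register the generated secrets with the initiator's keyring.
    rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.DQl
    rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IcW

    # Restrict the host to the digest/DH-group pair under test.
    rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

    # Attach to the kernel nvmet target (10.0.0.1:4420) with bidirectional DH-HMAC-CHAP,
    # check that the controller came up, then tear it down again.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1
    rpc.py bdev_nvme_get_controllers | grep -q nvme0    # the trace checks for "nvme0" via jq
    rpc.py bdev_nvme_detach_controller nvme0

The outer loops at host/auth.sh@100-102 repeat this for each of sha256/sha384/sha512 and each DH group from ffdhe2048 through ffdhe8192, which is why the same nvmet_auth_set_key / attach / get_controllers / detach pattern recurs for the remainder of the log.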
00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=ca877212357b33a143bc2680dee363f4933d1806125e3987 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:56.355 06:38:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.DQl 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.DQl 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.DQl 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=1d98d57dad8369e152bf7369fc2f37d7d09559a89636de44 00:27:56.355 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.IcW 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 1d98d57dad8369e152bf7369fc2f37d7d09559a89636de44 2 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 1d98d57dad8369e152bf7369fc2f37d7d09559a89636de44 2 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=1d98d57dad8369e152bf7369fc2f37d7d09559a89636de44 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.IcW 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.IcW 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IcW 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.356 06:38:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=cce055a1405396161d54c68ee8ae8680 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.jR1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key cce055a1405396161d54c68ee8ae8680 1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 cce055a1405396161d54c68ee8ae8680 1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=cce055a1405396161d54c68ee8ae8680 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.jR1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.jR1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.jR1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=ae641d02f07337a8a888182585fc2a2b 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.6BU 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key ae641d02f07337a8a888182585fc2a2b 1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 ae641d02f07337a8a888182585fc2a2b 1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
key=ae641d02f07337a8a888182585fc2a2b 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:27:56.356 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.6BU 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.6BU 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.6BU 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=576affd1abd1f5d05123b2293c93cf412928df80844f0435 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.6ez 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 576affd1abd1f5d05123b2293c93cf412928df80844f0435 2 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 576affd1abd1f5d05123b2293c93cf412928df80844f0435 2 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=576affd1abd1f5d05123b2293c93cf412928df80844f0435 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.6ez 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.6ez 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.6ez 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:27:56.615 06:38:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=4f26602e4f664d50846db712312d0063 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.pRr 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 4f26602e4f664d50846db712312d0063 0 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 4f26602e4f664d50846db712312d0063 0 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=4f26602e4f664d50846db712312d0063 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.pRr 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.pRr 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.pRr 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7b58843e4206a5b1979d3b11b8b449df72144f35fbc7f265fbe126386ad6c7ce 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.GIQ 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7b58843e4206a5b1979d3b11b8b449df72144f35fbc7f265fbe126386ad6c7ce 3 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7b58843e4206a5b1979d3b11b8b449df72144f35fbc7f265fbe126386ad6c7ce 3 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7b58843e4206a5b1979d3b11b8b449df72144f35fbc7f265fbe126386ad6c7ce 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.GIQ 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.GIQ 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.GIQ 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2186017 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # '[' -z 2186017 ']' 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # local max_retries=100 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:56.615 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # xtrace_disable 00:27:56.615 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@866 -- # return 0 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.YYr 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.QFH ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.QFH 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.DQl 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IcW ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 
/tmp/spdk.key-sha384.IcW 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.jR1 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.6BU ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.6BU 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.6ez 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.pRr ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.pRr 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.GIQ 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:56.874 06:38:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:56.874 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:27:56.875 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:56.875 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:56.875 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:56.875 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:27:56.875 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:27:56.875 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:27:57.131 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:57.131 06:38:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:58.063 Waiting for block devices as requested 00:27:58.063 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:58.320 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:58.320 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:58.320 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:58.577 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:58.577 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:58.577 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:58.577 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:58.835 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:27:58.835 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:58.835 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:59.093 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:59.093 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:59.093 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:59.093 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:59.351 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:59.351 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:59.609 No valid GPT data, bailing 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:59.609 06:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:27:59.609 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:59.867 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:27:59.868 00:27:59.868 Discovery Log Number of Records 2, Generation counter 2 00:27:59.868 =====Discovery Log Entry 0====== 00:27:59.868 trtype: tcp 00:27:59.868 adrfam: ipv4 00:27:59.868 subtype: current discovery subsystem 00:27:59.868 treq: not specified, sq flow control disable supported 00:27:59.868 portid: 1 00:27:59.868 trsvcid: 4420 00:27:59.868 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:59.868 traddr: 10.0.0.1 00:27:59.868 eflags: none 00:27:59.868 sectype: none 00:27:59.868 =====Discovery Log Entry 1====== 00:27:59.868 trtype: tcp 00:27:59.868 adrfam: ipv4 00:27:59.868 subtype: nvme subsystem 00:27:59.868 treq: not specified, sq flow control disable supported 00:27:59.868 portid: 1 00:27:59.868 trsvcid: 4420 00:27:59.868 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:59.868 traddr: 10.0.0.1 00:27:59.868 eflags: none 00:27:59.868 sectype: none 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:59.868 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.125 nvme0n1 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:00.125 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.126 nvme0n1 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.126 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.384 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.384 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.384 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.384 06:38:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.384 06:38:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.384 nvme0n1 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.384 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.642 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.643 nvme0n1 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.643 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 nvme0n1 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:00.902 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.161 nvme0n1 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.161 06:38:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:01.161 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.162 06:38:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.420 nvme0n1 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:28:01.420 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.421 
06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.421 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.679 nvme0n1 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.679 06:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.679 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.680 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:01.680 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.680 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.938 nvme0n1 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.938 06:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:01.938 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.196 nvme0n1 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:02.196 06:38:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.196 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.197 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:02.197 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.197 06:38:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.455 nvme0n1 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.455 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.713 nvme0n1 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:02.713 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:02.714 06:38:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:02.714 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:02.971 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:02.971 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:02.972 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.230 nvme0n1 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.230 06:38:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.488 nvme0n1 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.488 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.747 nvme0n1 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:03.747 06:38:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:03.747 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.005 nvme0n1 00:28:04.005 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.005 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.005 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.005 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.005 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.005 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.263 06:38:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.829 nvme0n1 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:04.829 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 
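The xtrace output above repeats one pattern for every digest / DH group / key ID combination under test: nvmet_auth_set_key (a helper in host/auth.sh) provisions the DH-HMAC-CHAP key and the optional controller key on the target side (the echoes of 'hmac(sha256)', the DH group name and the DHHC-1 key blobs), and connect_authenticate then restricts the SPDK host to that digest and DH group, attaches a controller with the matching key pair, checks that it shows up, and detaches it. Condensed from the trace, one iteration (sha256, ffdhe6144, key ID 1) looks roughly like the sketch below; rpc_cmd is the test suite's wrapper around SPDK's scripts/rpc.py, and the address, NQNs and key names are the ones visible in the surrounding log, so this is a reading of the trace rather than a standalone script.

    # One loop iteration, condensed from the trace (digest sha256, DH group ffdhe6144, key ID 1).
    # rpc_cmd forwards to SPDK's scripts/rpc.py; nvmet_auth_set_key is the host/auth.sh helper traced above.

    nvmet_auth_set_key sha256 ffdhe6144 1          # target side: install key1 (and ckey1)

    # Host side: allow only the digest/DH group under test, then attach with the matching key pair.
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Verify the authenticated controller came up, then tear it down before the next combination.
    [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    rpc_cmd bdev_nvme_detach_controller nvme0

The controller key is optional: the ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expansion seen throughout the trace adds --dhchap-ctrlr-key only when a controller key exists for that key ID, which is why the attach calls for key IDs 0 through 3 request bidirectional authentication while the key ID 4 attach passes --dhchap-key key4 alone (its ckey is empty in the trace above).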
00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:04.830 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.395 nvme0n1 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.395 06:38:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.395 06:38:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.395 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.653 nvme0n1 00:28:05.653 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.653 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:05.653 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.653 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.653 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:05.911 06:38:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 nvme0n1 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 06:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:06.477 06:38:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:06.477 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.044 nvme0n1 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.044 06:38:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:07.977 nvme0n1 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:07.977 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:07.978 06:38:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.912 nvme0n1 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:08.912 
06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:08.912 06:38:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.846 nvme0n1 00:28:09.846 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.846 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.846 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:09.846 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.846 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:09.847 
06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:09.847 06:38:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.781 nvme0n1 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:10.781 06:38:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 nvme0n1 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.715 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.973 nvme0n1 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:11.973 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.974 nvme0n1 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:11.974 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:12.232 06:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:12.232 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.233 nvme0n1 00:28:12.233 06:38:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.233 06:38:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.233 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.491 nvme0n1 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:12.491 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.492 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.750 nvme0n1 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.750 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:12.751 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 nvme0n1 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 
06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.009 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.010 06:38:44 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.010 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 nvme0n1 00:28:13.268 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.268 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.268 06:38:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.268 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.526 nvme0n1 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local 
-A ip_candidates 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.526 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.784 nvme0n1 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.784 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:13.785 
06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:13.785 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.043 nvme0n1 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.043 
06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.043 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 
]] 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.044 06:38:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.302 nvme0n1 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.302 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:14.561 06:38:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.561 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.820 nvme0n1 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:14.820 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.079 nvme0n1 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.079 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.080 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:15.080 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.080 06:38:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.339 nvme0n1 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.339 06:38:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.339 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.598 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.881 nvme0n1 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:15.881 06:38:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.238 nvme0n1 00:28:16.238 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.238 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.238 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.238 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.238 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.238 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.497 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:16.755 nvme0n1 00:28:16.755 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:16.755 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.755 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:16.755 06:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:16.755 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.013 06:38:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.013 06:38:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.579 nvme0n1 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:17.579 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:17.580 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:17.580 
06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.145 nvme0n1 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.145 06:38:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.403 nvme0n1 00:28:18.403 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.403 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.403 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.403 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.403 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:18.661 06:38:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:18.661 06:38:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.595 nvme0n1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:19.595 06:38:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 nvme0n1 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # 
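
For reference, each connect/verify/teardown cycle traced above reduces to four RPCs on the initiator side. A minimal sketch of one pass, assuming the target is already listening on 10.0.0.1:4420 and that the DH-HMAC-CHAP secrets key1/ckey1 were registered with the keyring earlier in the run (that setup precedes this excerpt); rpc_cmd in the trace is the autotest wrapper around scripts/rpc.py:

    rpc=scripts/rpc.py

    # Limit the host to one digest/dhgroup pair so the handshake must use it.
    $rpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192

    # Authenticated connect: --dhchap-key proves the host's identity,
    # --dhchap-ctrlr-key (optional) asks the controller to prove its own.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # Success criterion used at auth.sh@64: the controller must show up by name.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]

    # Tear down before the next digest/dhgroup/keyid combination.
    $rpc bdev_nvme_detach_controller nvme0
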
xtrace_disable 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:20.527 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.528 
06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:20.528 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.461 nvme0n1 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.461 06:38:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:21.461 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.393 nvme0n1 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:22.393 06:38:53 
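
The matching nvmet_auth_set_key calls (auth.sh@42-51 in the trace) provision the same secret on the kernel nvmet side before each connect. set -x does not show where the echoes are redirected, so the configfs path and attribute names below are an assumption based on the Linux nvmet host layout rather than something visible in this log:

    # Sketch of the target-side helper; keys[]/ckeys[] hold the DHHC-1 blobs seen above.
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]}
        # Assumed location of the allowed-host entry created earlier in the test.
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

        echo "hmac(${digest})" > "$host/dhchap_hash"    # e.g. hmac(sha384)
        echo "$dhgroup" > "$host/dhchap_dhgroup"        # e.g. ffdhe8192
        echo "$key" > "$host/dhchap_key"
        # Only configure a controller (bidirectional) key when one exists for this keyid.
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"
    }
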
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.393 06:38:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:22.393 06:38:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:22.393 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.328 nvme0n1 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.328 06:38:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
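
The for digest / for dhgroup / for keyid lines traced just above (auth.sh@100-102) show how the sweep is driven. Its shape is sketched below with placeholder array contents, since only the combinations visible in this excerpt are known (sha384/sha512, ffdhe2048/ffdhe3072/ffdhe8192, keyids 0-4):

    digests=(sha384 sha512)                  # placeholders; defined earlier in auth.sh
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe8192) # placeholders; order not taken from this log
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"   # target side (auth.sh@103)
                connect_authenticate "$digest" "$dhgroup" "$keyid" # host side (auth.sh@104)
            done
        done
    done
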
common/autotest_common.sh@10 -- # set +x 00:28:23.328 nvme0n1 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:23.328 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.329 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 nvme0n1 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:23.588 
06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.588 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 nvme0n1 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:23.847 
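
get_main_ns_ip (nvmf/common.sh@769-783 above) is what resolves the -a 10.0.0.1 address: it maps the transport to the name of an environment variable and then dereferences it. A reconstruction from the trace; the transport variable is written as TEST_TRANSPORT here and the indirect expansion is inferred, since set -x only prints already-expanded values:

    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP     # this run: tcp

        [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}       # -> "NVMF_INITIATOR_IP"
        [[ -z ${!ip} ]] && return 1                # indirect lookup of that variable
        echo "${!ip}"                              # -> 10.0.0.1
    }
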
06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:23.847 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.105 nvme0n1 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:24.105 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.106 06:38:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 nvme0n1 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:24.364 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
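
The keyid=4 connects above carry --dhchap-key key4 but no --dhchap-ctrlr-key because ckeys[4] is empty and auth.sh@58 builds the extra arguments with bash's :+ expansion, which drops the whole pair for a null value. A standalone illustration (the secrets are placeholders):

    ckeys=("s0" "s1" "s2" "s3" "")   # keyid 4 has no controller key, as in the trace
    for keyid in "${!ckeys[@]}"; do
        # Same idiom as auth.sh@58: expands to two words, or to nothing at all.
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "keyid=$keyid -> ${ckey[*]:-<no controller key args>}"
    done

keyids 0-3 print '--dhchap-ctrlr-key ckeyN' while keyid 4 prints the fallback, matching the bdev_nvme_attach_controller invocations in the log.
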
host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.365 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 nvme0n1 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.623 
06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.623 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.624 06:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.624 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.881 nvme0n1 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:24.881 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:24.881 06:38:56 
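
The xtrace_disable / set +x pairs that bracket every rpc_cmd in this log are the autotest framework suppressing tracing inside its helpers so only the interesting command and its result get printed. A minimal sketch of the idiom, not the literal autotest_common.sh implementation:

    xtrace_disable() {
        # Remember whether -x was active, then silence tracing.
        [[ $- == *x* ]] && PREV_XTRACE=1 || PREV_XTRACE=0
        set +x
    }

    xtrace_restore() {
        # Re-enable tracing only if the caller had it on.
        if ((PREV_XTRACE)); then
            set -x
        fi
    }
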
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:24.882 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.140 nvme0n1 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.140 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.141 06:38:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.141 06:38:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.399 nvme0n1 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:25.399 
06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.399 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
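The passes above all follow the same per-key shape: configure the host with the digest and DH group under test, attach to the target at 10.0.0.1:4420 with the matching DH-HMAC-CHAP key (and controller key, when one is configured), check that a controller named nvme0 shows up, then detach before the next slot. Below is a minimal host-side sketch of one such iteration, assuming scripts/rpc.py as the JSON-RPC client (the test's rpc_cmd wrapper is not reproduced here) and that keyring entries named key1/ckey1 were registered earlier in the run; every RPC name and flag is taken from the trace above.

# One connect_authenticate-style iteration (sketch; rpc.py path and the pre-registered
# key names key1/ckey1 are assumptions, the RPCs and flags appear verbatim in the trace)
rpc=./scripts/rpc.py
$rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
$rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1   # omit --dhchap-ctrlr-key when no controller secret is configured (as for keyid 4 above)
[[ "$($rpc bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]   # controller is present only if authentication succeeded
$rpc bdev_nvme_detach_controller nvme0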
00:28:25.657 nvme0n1 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:25.657 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:25.658 06:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.658 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.916 nvme0n1 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:25.916 06:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:25.916 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.917 06:38:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:25.917 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.175 nvme0n1 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.175 06:38:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.175 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.175 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.175 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.175 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.175 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.175 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.176 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.740 nvme0n1 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.740 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.999 nvme0n1 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:26.999 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.257 nvme0n1 00:28:27.257 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.258 06:38:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.258 06:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.258 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.836 nvme0n1 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:27.836 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:27.837 06:38:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:27.837 06:38:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.404 nvme0n1 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.404 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.971 nvme0n1 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:28.971 06:39:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.592 nvme0n1 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:29.592 06:39:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:29.592 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.158 nvme0n1 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFhNWY2ZjA5ZWJjYWI4MjVkNTdjZTk0MDU2NDgwMzjLEDwY: 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTVlOTY3ZjE5YTNlN2M3ZDYxOGJjNzEzYjFjMDU2ODEzN2E1ZDNmYWJkZGE4ZmRkMTQ2Njg5MzcxYzVjZDcxN8WkG6k=: 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:30.158 06:39:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.091 nvme0n1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:31.091 06:39:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.024 nvme0n1 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.024 06:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.024 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.025 06:39:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.025 06:39:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.958 nvme0n1 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NTc2YWZmZDFhYmQxZjVkMDUxMjNiMjI5M2M5M2NmNDEyOTI4ZGY4MDg0NGYwNDM1SOS0nA==: 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NGYyNjYwMmU0ZjY2NGQ1MDg0NmRiNzEyMzEyZDAwNjPbSrjc: 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:32.958 06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:32.958 
06:39:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.891 nvme0n1 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2I1ODg0M2U0MjA2YTViMTk3OWQzYjExYjhiNDQ5ZGY3MjE0NGYzNWZiYzdmMjY1ZmJlMTI2Mzg2YWQ2YzdjZQwpQtE=: 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:33.891 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:33.892 06:39:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.825 nvme0n1 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:34.825 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.826 request: 00:28:34.826 { 00:28:34.826 "name": "nvme0", 00:28:34.826 "trtype": "tcp", 00:28:34.826 "traddr": "10.0.0.1", 00:28:34.826 "adrfam": "ipv4", 00:28:34.826 "trsvcid": "4420", 00:28:34.826 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:34.826 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:34.826 "prchk_reftag": false, 00:28:34.826 "prchk_guard": false, 00:28:34.826 "hdgst": false, 00:28:34.826 "ddgst": false, 00:28:34.826 "allow_unrecognized_csi": false, 00:28:34.826 "method": "bdev_nvme_attach_controller", 00:28:34.826 "req_id": 1 00:28:34.826 } 00:28:34.826 Got JSON-RPC error response 00:28:34.826 response: 00:28:34.826 { 00:28:34.826 "code": -5, 00:28:34.826 "message": "Input/output error" 00:28:34.826 } 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 
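The exchange traced above is the first negative check in host/auth.sh: with DH-CHAP enforced on the target (nvmet_auth_set_key sha256 ffdhe2048 1 followed by rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048), an attach attempt that supplies no --dhchap-key is expected to fail, and the NOT wrapper turns the JSON-RPC -5 "Input/output error" into a passing assertion. A minimal standalone sketch of the same check, assuming rpc_cmd is the usual autotest wrapper around scripts/rpc.py and that a target configured as in this run is still listening on 10.0.0.1:4420:

    # Sketch only: expect the unauthenticated attach to be rejected by the
    # DH-CHAP-enforcing target (flags taken verbatim from the trace above).
    if ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
           -a 10.0.0.1 -s 4420 \
           -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0; then
        echo "unexpected: attach without --dhchap-key succeeded" >&2
        exit 1
    fi
    # On the expected path the RPC prints the -5 (Input/output error)
    # response seen in the log, and the script continues.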
00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:34.826 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.084 request: 00:28:35.084 { 00:28:35.084 "name": "nvme0", 00:28:35.084 "trtype": "tcp", 00:28:35.084 "traddr": "10.0.0.1", 00:28:35.084 "adrfam": "ipv4", 00:28:35.084 "trsvcid": "4420", 00:28:35.084 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:35.084 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:35.084 "prchk_reftag": false, 00:28:35.084 "prchk_guard": false, 00:28:35.084 "hdgst": false, 00:28:35.084 "ddgst": false, 00:28:35.084 "dhchap_key": "key2", 00:28:35.084 "allow_unrecognized_csi": false, 00:28:35.084 "method": "bdev_nvme_attach_controller", 00:28:35.084 "req_id": 1 00:28:35.084 } 00:28:35.084 Got JSON-RPC error response 00:28:35.084 response: 00:28:35.084 { 00:28:35.084 "code": -5, 00:28:35.084 "message": "Input/output error" 00:28:35.084 } 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 
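The second negative case follows the same pattern: attaching with --dhchap-key key2, which is not the key loaded into the target for this host (keyid 1 was set above), is likewise rejected with -5, after which the test confirms via bdev_nvme_get_controllers piped through jq length that no controller was left behind. A hedged one-liner for that post-condition, again assuming scripts/rpc.py from the SPDK tree used in this run:

    # After each failed DH-CHAP attach, no nvme0 controller should remain.
    count=$(./scripts/rpc.py bdev_nvme_get_controllers | jq length)
    [ "$count" -eq 0 ] || { echo "stale controller after failed attach" >&2; exit 1; }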
00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.084 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.085 request: 00:28:35.085 { 00:28:35.085 "name": "nvme0", 00:28:35.085 "trtype": "tcp", 00:28:35.085 "traddr": "10.0.0.1", 00:28:35.085 "adrfam": "ipv4", 00:28:35.085 "trsvcid": "4420", 00:28:35.085 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:35.085 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:35.085 "prchk_reftag": false, 00:28:35.085 "prchk_guard": false, 00:28:35.085 "hdgst": false, 00:28:35.085 "ddgst": false, 00:28:35.085 "dhchap_key": "key1", 00:28:35.085 "dhchap_ctrlr_key": "ckey2", 00:28:35.085 "allow_unrecognized_csi": false, 00:28:35.085 "method": "bdev_nvme_attach_controller", 00:28:35.085 "req_id": 1 00:28:35.085 } 00:28:35.085 Got JSON-RPC error response 00:28:35.085 response: 00:28:35.085 { 00:28:35.085 "code": -5, 00:28:35.085 "message": "Input/output 
error" 00:28:35.085 } 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.085 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.343 nvme0n1 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.343 06:39:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.343 request: 00:28:35.343 { 00:28:35.343 "name": "nvme0", 00:28:35.343 "dhchap_key": "key1", 00:28:35.343 "dhchap_ctrlr_key": "ckey2", 00:28:35.343 "method": "bdev_nvme_set_keys", 00:28:35.343 "req_id": 1 00:28:35.343 } 00:28:35.343 Got JSON-RPC error response 00:28:35.343 response: 00:28:35.343 { 00:28:35.343 "code": -13, 00:28:35.343 "message": "Permission denied" 00:28:35.343 } 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( 
!es == 0 )) 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.343 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:35.601 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:35.601 06:39:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:36.534 06:39:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:37.484 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:Y2E4NzcyMTIzNTdiMzNhMTQzYmMyNjgwZGVlMzYzZjQ5MzNkMTgwNjEyNWUzOTg3XWEwsA==: 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: ]] 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:MWQ5OGQ1N2RhZDgzNjllMTUyYmY3MzY5ZmMyZjM3ZDdkMDk1NTlhODk2MzZkZTQ0vQrjYA==: 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:28:37.485 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.486 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.751 nvme0n1 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:Y2NlMDU1YTE0MDUzOTYxNjFkNTRjNjhlZThhZTg2ODAN/Upw: 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: ]] 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YWU2NDFkMDJmMDczMzdhOGE4ODgxODI1ODVmYzJhMmLtiK5L: 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@650 -- # local es=0 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.751 request: 00:28:37.751 { 00:28:37.751 "name": "nvme0", 00:28:37.751 "dhchap_key": "key2", 00:28:37.751 "dhchap_ctrlr_key": "ckey1", 00:28:37.751 "method": "bdev_nvme_set_keys", 00:28:37.751 "req_id": 1 00:28:37.751 } 00:28:37.751 Got JSON-RPC error response 00:28:37.751 response: 00:28:37.751 { 00:28:37.751 "code": -13, 00:28:37.751 "message": "Permission denied" 00:28:37.751 } 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:37.751 06:39:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:39.123 06:39:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.057 06:39:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:40.057 rmmod nvme_tcp 00:28:40.057 rmmod nvme_fabrics 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 2186017 ']' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 2186017 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # '[' -z 2186017 ']' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # kill -0 2186017 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # uname 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2186017 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2186017' 00:28:40.057 killing process with pid 2186017 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@971 -- # kill 2186017 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@976 -- # wait 2186017 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@791 -- # iptables-save 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:40.057 06:39:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:28:42.667 06:39:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:43.604 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:43.605 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:28:43.605 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:28:43.605 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:28:43.605 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:28:43.605 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:28:43.605 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:28:43.605 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:28:43.863 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:28:43.863 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 
00:28:44.800 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:28:44.800 06:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.YYr /tmp/spdk.key-null.DQl /tmp/spdk.key-sha256.jR1 /tmp/spdk.key-sha384.6ez /tmp/spdk.key-sha512.GIQ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:28:44.800 06:39:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:46.177 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:46.177 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:46.177 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:46.177 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:46.177 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:46.177 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:46.177 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:46.177 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:46.177 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:28:46.177 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:46.177 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:28:46.177 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:28:46.177 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:28:46.177 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:28:46.177 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:28:46.177 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:28:46.177 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:28:46.177 00:28:46.177 real 0m52.613s 00:28:46.177 user 0m50.228s 00:28:46.177 sys 0m6.192s 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.177 ************************************ 00:28:46.177 END TEST nvmf_auth_host 00:28:46.177 ************************************ 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:46.177 06:39:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.177 ************************************ 00:28:46.177 START TEST nvmf_digest 00:28:46.178 ************************************ 00:28:46.178 06:39:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:46.178 * Looking for test storage... 
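The nvmf_auth_host run that ends above is the DH-HMAC-CHAP rotation check: it calls bdev_nvme_set_keys against the live nvme0 controller and expects a matched key pair to succeed while a mismatched pairing fails. Reduced to the two RPCs the trace actually issues (rpc_cmd in the trace wraps scripts/rpc.py; key1/key2 and ckey1/ckey2 are key names registered earlier by host/auth.sh), the pattern is roughly:

    # accepted: host key and controller key belong to the same pair
    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
    # rejected: mixing pairs returns JSON-RPC error -13, "Permission denied"
    rpc.py bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2

The NOT wrapper inverts the exit status so the -13 response counts as a pass, after which the test polls bdev_nvme_get_controllers until the controller count drops to zero before cleaning up. Treat this as a reading aid for the trace, not a verbatim excerpt of host/auth.sh.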
00:28:46.178 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:46.178 06:39:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:28:46.178 06:39:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lcov --version 00:28:46.178 06:39:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:46.178 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:28:46.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.437 --rc genhtml_branch_coverage=1 00:28:46.437 --rc genhtml_function_coverage=1 00:28:46.437 --rc genhtml_legend=1 00:28:46.437 --rc geninfo_all_blocks=1 00:28:46.437 --rc geninfo_unexecuted_blocks=1 00:28:46.437 00:28:46.437 ' 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:28:46.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.437 --rc genhtml_branch_coverage=1 00:28:46.437 --rc genhtml_function_coverage=1 00:28:46.437 --rc genhtml_legend=1 00:28:46.437 --rc geninfo_all_blocks=1 00:28:46.437 --rc geninfo_unexecuted_blocks=1 00:28:46.437 00:28:46.437 ' 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:28:46.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.437 --rc genhtml_branch_coverage=1 00:28:46.437 --rc genhtml_function_coverage=1 00:28:46.437 --rc genhtml_legend=1 00:28:46.437 --rc geninfo_all_blocks=1 00:28:46.437 --rc geninfo_unexecuted_blocks=1 00:28:46.437 00:28:46.437 ' 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:28:46.437 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:46.437 --rc genhtml_branch_coverage=1 00:28:46.437 --rc genhtml_function_coverage=1 00:28:46.437 --rc genhtml_legend=1 00:28:46.437 --rc geninfo_all_blocks=1 00:28:46.437 --rc geninfo_unexecuted_blocks=1 00:28:46.437 00:28:46.437 ' 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:46.437 
06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:46.437 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:46.438 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:46.438 06:39:18 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:28:46.438 06:39:18 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:48.972 
06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:28:48.972 Found 0000:09:00.0 (0x8086 - 0x159b) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:28:48.972 Found 0000:09:00.1 (0x8086 - 0x159b) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:28:48.972 Found net devices under 0000:09:00.0: cvl_0_0 
00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:28:48.972 Found net devices under 0000:09:00.1: cvl_0_1 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # is_hw=yes 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr 
add 10.0.0.2/24 dev cvl_0_0 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:48.972 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:48.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:48.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.287 ms 00:28:48.973 00:28:48.973 --- 10.0.0.2 ping statistics --- 00:28:48.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.973 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:48.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:48.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.128 ms 00:28:48.973 00:28:48.973 --- 10.0.0.1 ping statistics --- 00:28:48.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:48.973 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:28:48.973 ************************************ 00:28:48.973 START TEST nvmf_digest_clean 00:28:48.973 ************************************ 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1127 -- # run_digest 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@120 -- # local dsa_initiator 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=2196399 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 2196399 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2196399 ']' 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:48.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.973 [2024-11-20 06:39:20.531680] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:28:48.973 [2024-11-20 06:39:20.531772] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:48.973 [2024-11-20 06:39:20.606497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:48.973 [2024-11-20 06:39:20.662890] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:48.973 [2024-11-20 06:39:20.662940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:48.973 [2024-11-20 06:39:20.662968] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:48.973 [2024-11-20 06:39:20.662979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:48.973 [2024-11-20 06:39:20.662988] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
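Before any digest case runs, nvmf_tcp_init above has split the two E810 ports between the default namespace (initiator side, cvl_0_1, 10.0.0.1) and a dedicated namespace (target side, cvl_0_0, 10.0.0.2), and nvmfappstart launches the target inside that namespace with configuration deferred to RPC. Condensed from the commands visible in the trace (full spdk paths are shortened here, and backgrounding is implied), the setup is roughly:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc

Once waitforlisten sees the RPC socket, common_target_config issues the RPCs that produce the null0 bdev and the NVMe/TCP listener on 10.0.0.2 port 4420 reported in the notices below.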
00:28:48.973 [2024-11-20 06:39:20.663590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:28:48.973 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.232 null0 00:28:49.232 [2024-11-20 06:39:20.885335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:49.232 [2024-11-20 06:39:20.909526] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2196424 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2196424 /var/tmp/bperf.sock 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2196424 ']' 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@838 -- # local max_retries=100 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:49.232 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:49.232 06:39:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:49.232 [2024-11-20 06:39:20.961736] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:28:49.232 [2024-11-20 06:39:20.961811] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196424 ] 00:28:49.232 [2024-11-20 06:39:21.031882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:49.489 [2024-11-20 06:39:21.094139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:49.489 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:49.489 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:49.489 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:49.489 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:49.489 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:49.747 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:49.748 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:50.313 nvme0n1 00:28:50.313 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:50.313 06:39:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:50.313 Running I/O for 2 seconds... 
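The measurement itself comes from a second SPDK process: bdevperf acts as the NVMe/TCP host, and the per-module crc32c counters decide whether digests ran in software or on an accelerator. In outline, the sequence driven through /var/tmp/bperf.sock above and below is (paths abbreviated relative to the spdk tree):

    build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock \
        -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &
    scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
    # --ddgst enables the NVMe/TCP data digest for this controller
    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats

get_accel_stats then filters the accel_get_stats output with jq for the crc32c opcode, and the test expects the executing module to be "software" here, since DSA offload (scan_dsa) is disabled for this case.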
00:28:52.179 18520.00 IOPS, 72.34 MiB/s [2024-11-20T05:39:24.015Z] 18668.50 IOPS, 72.92 MiB/s 00:28:52.179 Latency(us) 00:28:52.179 [2024-11-20T05:39:24.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.179 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:52.179 nvme0n1 : 2.00 18690.25 73.01 0.00 0.00 6841.51 3495.25 15146.10 00:28:52.179 [2024-11-20T05:39:24.015Z] =================================================================================================================== 00:28:52.179 [2024-11-20T05:39:24.015Z] Total : 18690.25 73.01 0.00 0.00 6841.51 3495.25 15146.10 00:28:52.179 { 00:28:52.179 "results": [ 00:28:52.179 { 00:28:52.179 "job": "nvme0n1", 00:28:52.179 "core_mask": "0x2", 00:28:52.179 "workload": "randread", 00:28:52.179 "status": "finished", 00:28:52.179 "queue_depth": 128, 00:28:52.179 "io_size": 4096, 00:28:52.179 "runtime": 2.004521, 00:28:52.179 "iops": 18690.250688319054, 00:28:52.179 "mibps": 73.0087917512463, 00:28:52.179 "io_failed": 0, 00:28:52.179 "io_timeout": 0, 00:28:52.179 "avg_latency_us": 6841.5067023740685, 00:28:52.179 "min_latency_us": 3495.2533333333336, 00:28:52.179 "max_latency_us": 15146.097777777777 00:28:52.179 } 00:28:52.179 ], 00:28:52.179 "core_count": 1 00:28:52.179 } 00:28:52.436 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:52.436 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:52.436 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:52.437 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:52.437 | select(.opcode=="crc32c") 00:28:52.437 | "\(.module_name) \(.executed)"' 00:28:52.437 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2196424 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2196424 ']' 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2196424 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2196424 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2196424' 00:28:52.695 killing process with pid 2196424 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2196424 00:28:52.695 Received shutdown signal, test time was about 2.000000 seconds 00:28:52.695 00:28:52.695 Latency(us) 00:28:52.695 [2024-11-20T05:39:24.531Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:52.695 [2024-11-20T05:39:24.531Z] =================================================================================================================== 00:28:52.695 [2024-11-20T05:39:24.531Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:52.695 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2196424 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2196949 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2196949 /var/tmp/bperf.sock 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2196949 ']' 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:52.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:52.953 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:52.953 [2024-11-20 06:39:24.589068] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:28:52.953 [2024-11-20 06:39:24.589154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2196949 ] 00:28:52.953 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:52.953 Zero copy mechanism will not be used. 00:28:52.953 [2024-11-20 06:39:24.656669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.953 [2024-11-20 06:39:24.715731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.211 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:53.211 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:53.211 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:53.211 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:53.211 06:39:24 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:53.469 06:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.469 06:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:54.034 nvme0n1 00:28:54.034 06:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:54.034 06:39:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:54.034 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:54.034 Zero copy mechanism will not be used. 00:28:54.034 Running I/O for 2 seconds... 
00:28:56.340 5738.00 IOPS, 717.25 MiB/s [2024-11-20T05:39:28.176Z] 5622.00 IOPS, 702.75 MiB/s 00:28:56.340 Latency(us) 00:28:56.340 [2024-11-20T05:39:28.176Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.340 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:56.340 nvme0n1 : 2.00 5622.64 702.83 0.00 0.00 2841.36 676.60 4878.79 00:28:56.340 [2024-11-20T05:39:28.176Z] =================================================================================================================== 00:28:56.340 [2024-11-20T05:39:28.176Z] Total : 5622.64 702.83 0.00 0.00 2841.36 676.60 4878.79 00:28:56.340 { 00:28:56.340 "results": [ 00:28:56.340 { 00:28:56.340 "job": "nvme0n1", 00:28:56.340 "core_mask": "0x2", 00:28:56.340 "workload": "randread", 00:28:56.340 "status": "finished", 00:28:56.340 "queue_depth": 16, 00:28:56.340 "io_size": 131072, 00:28:56.340 "runtime": 2.002617, 00:28:56.340 "iops": 5622.642771932926, 00:28:56.340 "mibps": 702.8303464916157, 00:28:56.340 "io_failed": 0, 00:28:56.340 "io_timeout": 0, 00:28:56.340 "avg_latency_us": 2841.3636197618575, 00:28:56.340 "min_latency_us": 676.5985185185185, 00:28:56.340 "max_latency_us": 4878.791111111111 00:28:56.340 } 00:28:56.340 ], 00:28:56.340 "core_count": 1 00:28:56.340 } 00:28:56.340 06:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:56.340 06:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:28:56.340 06:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:56.340 06:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:56.340 | select(.opcode=="crc32c") 00:28:56.340 | "\(.module_name) \(.executed)"' 00:28:56.341 06:39:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2196949 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2196949 ']' 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2196949 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2196949 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2196949' 00:28:56.341 killing process with pid 2196949 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2196949 00:28:56.341 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.341 00:28:56.341 Latency(us) 00:28:56.341 [2024-11-20T05:39:28.177Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.341 [2024-11-20T05:39:28.177Z] =================================================================================================================== 00:28:56.341 [2024-11-20T05:39:28.177Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.341 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2196949 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2197358 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2197358 /var/tmp/bperf.sock 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2197358 ']' 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:28:56.599 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:28:56.599 [2024-11-20 06:39:28.432387] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:28:56.599 [2024-11-20 06:39:28.432474] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197358 ] 00:28:56.857 [2024-11-20 06:39:28.498663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.857 [2024-11-20 06:39:28.557223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.857 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:28:56.857 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:28:56.857 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:28:56.857 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:56.857 06:39:28 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:57.423 06:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.423 06:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.988 nvme0n1 00:28:57.988 06:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:57.988 06:39:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:57.988 Running I/O for 2 seconds... 
00:28:59.853 18970.00 IOPS, 74.10 MiB/s [2024-11-20T05:39:31.689Z] 18557.00 IOPS, 72.49 MiB/s 00:28:59.853 Latency(us) 00:28:59.853 [2024-11-20T05:39:31.689Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.853 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:59.853 nvme0n1 : 2.01 18556.61 72.49 0.00 0.00 6882.27 2730.67 13204.29 00:28:59.853 [2024-11-20T05:39:31.689Z] =================================================================================================================== 00:28:59.853 [2024-11-20T05:39:31.689Z] Total : 18556.61 72.49 0.00 0.00 6882.27 2730.67 13204.29 00:28:59.853 { 00:28:59.853 "results": [ 00:28:59.853 { 00:28:59.853 "job": "nvme0n1", 00:28:59.853 "core_mask": "0x2", 00:28:59.853 "workload": "randwrite", 00:28:59.853 "status": "finished", 00:28:59.853 "queue_depth": 128, 00:28:59.853 "io_size": 4096, 00:28:59.853 "runtime": 2.00694, 00:28:59.853 "iops": 18556.60856826811, 00:28:59.853 "mibps": 72.48675221979731, 00:28:59.853 "io_failed": 0, 00:28:59.853 "io_timeout": 0, 00:28:59.853 "avg_latency_us": 6882.273923626649, 00:28:59.853 "min_latency_us": 2730.6666666666665, 00:28:59.853 "max_latency_us": 13204.29037037037 00:28:59.853 } 00:28:59.853 ], 00:28:59.853 "core_count": 1 00:28:59.853 } 00:28:59.853 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:00.112 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:00.112 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:00.112 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:00.112 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:00.112 | select(.opcode=="crc32c") 00:29:00.112 | "\(.module_name) \(.executed)"' 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2197358 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2197358 ']' 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2197358 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2197358 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2197358' 00:29:00.370 killing process with pid 2197358 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2197358 00:29:00.370 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.370 00:29:00.370 Latency(us) 00:29:00.370 [2024-11-20T05:39:32.206Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.370 [2024-11-20T05:39:32.206Z] =================================================================================================================== 00:29:00.370 [2024-11-20T05:39:32.206Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.370 06:39:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2197358 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2197768 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2197768 /var/tmp/bperf.sock 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # '[' -z 2197768 ']' 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:00.628 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:00.628 [2024-11-20 06:39:32.278568] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:29:00.628 [2024-11-20 06:39:32.278664] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2197768 ] 00:29:00.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:00.628 Zero copy mechanism will not be used. 00:29:00.628 [2024-11-20 06:39:32.344919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:00.628 [2024-11-20 06:39:32.403826] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.886 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:00.886 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@866 -- # return 0 00:29:00.886 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:00.886 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:00.886 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:01.144 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.144 06:39:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:01.709 nvme0n1 00:29:01.709 06:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:01.709 06:39:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:01.709 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:01.709 Zero copy mechanism will not be used. 00:29:01.709 Running I/O for 2 seconds... 
00:29:04.010 5696.00 IOPS, 712.00 MiB/s [2024-11-20T05:39:35.846Z] 5967.50 IOPS, 745.94 MiB/s 00:29:04.010 Latency(us) 00:29:04.010 [2024-11-20T05:39:35.846Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.010 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:04.010 nvme0n1 : 2.00 5965.07 745.63 0.00 0.00 2675.44 1771.90 4466.16 00:29:04.010 [2024-11-20T05:39:35.846Z] =================================================================================================================== 00:29:04.010 [2024-11-20T05:39:35.846Z] Total : 5965.07 745.63 0.00 0.00 2675.44 1771.90 4466.16 00:29:04.010 { 00:29:04.010 "results": [ 00:29:04.010 { 00:29:04.010 "job": "nvme0n1", 00:29:04.010 "core_mask": "0x2", 00:29:04.010 "workload": "randwrite", 00:29:04.010 "status": "finished", 00:29:04.010 "queue_depth": 16, 00:29:04.010 "io_size": 131072, 00:29:04.010 "runtime": 2.004001, 00:29:04.010 "iops": 5965.06688369916, 00:29:04.010 "mibps": 745.633360462395, 00:29:04.010 "io_failed": 0, 00:29:04.010 "io_timeout": 0, 00:29:04.010 "avg_latency_us": 2675.4355538205095, 00:29:04.010 "min_latency_us": 1771.8992592592592, 00:29:04.010 "max_latency_us": 4466.157037037037 00:29:04.010 } 00:29:04.010 ], 00:29:04.010 "core_count": 1 00:29:04.010 } 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:04.010 | select(.opcode=="crc32c") 00:29:04.010 | "\(.module_name) \(.executed)"' 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2197768 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2197768 ']' 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2197768 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.010 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2197768 00:29:04.011 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:04.011 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' 
reactor_1 = sudo ']' 00:29:04.011 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2197768' 00:29:04.011 killing process with pid 2197768 00:29:04.011 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2197768 00:29:04.011 Received shutdown signal, test time was about 2.000000 seconds 00:29:04.011 00:29:04.011 Latency(us) 00:29:04.011 [2024-11-20T05:39:35.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.011 [2024-11-20T05:39:35.847Z] =================================================================================================================== 00:29:04.011 [2024-11-20T05:39:35.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:04.011 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2197768 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2196399 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # '[' -z 2196399 ']' 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # kill -0 2196399 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # uname 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2196399 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2196399' 00:29:04.268 killing process with pid 2196399 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@971 -- # kill 2196399 00:29:04.268 06:39:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@976 -- # wait 2196399 00:29:04.525 00:29:04.525 real 0m15.731s 00:29:04.525 user 0m31.636s 00:29:04.525 sys 0m4.278s 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:04.525 ************************************ 00:29:04.525 END TEST nvmf_digest_clean 00:29:04.525 ************************************ 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:04.525 ************************************ 00:29:04.525 START TEST nvmf_digest_error 00:29:04.525 ************************************ 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1127 -- # 
run_digest_error 00:29:04.525 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=2198325 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 2198325 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2198325 ']' 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:04.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:04.526 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.526 [2024-11-20 06:39:36.310685] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:04.526 [2024-11-20 06:39:36.310757] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:04.786 [2024-11-20 06:39:36.380603] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:04.786 [2024-11-20 06:39:36.435936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:04.786 [2024-11-20 06:39:36.435986] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:04.786 [2024-11-20 06:39:36.436013] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:04.786 [2024-11-20 06:39:36.436024] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:04.786 [2024-11-20 06:39:36.436034] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:04.786 [2024-11-20 06:39:36.436606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:04.786 [2024-11-20 06:39:36.561336] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:04.786 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:05.057 null0 00:29:05.057 [2024-11-20 06:39:36.676703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:05.057 [2024-11-20 06:39:36.700926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:05.057 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2198352 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2198352 /var/tmp/bperf.sock 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2198352 ']' 
00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:05.058 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:05.058 [2024-11-20 06:39:36.748466] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:05.058 [2024-11-20 06:39:36.748542] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198352 ] 00:29:05.058 [2024-11-20 06:39:36.813192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.058 [2024-11-20 06:39:36.871023] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:05.315 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:05.315 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:05.315 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:05.315 06:39:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:05.571 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:05.571 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:05.571 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:05.571 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:05.571 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:05.571 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.136 nvme0n1 00:29:06.136 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:06.136 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:06.136 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 
00:29:06.136 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:06.136 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:06.136 06:39:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:06.136 Running I/O for 2 seconds... 00:29:06.136 [2024-11-20 06:39:37.968529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.136 [2024-11-20 06:39:37.968579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.136 [2024-11-20 06:39:37.968599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:37.983074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:37.983130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:24911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:37.983162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:37.996998] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:37.997032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:9181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:37.997050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:38.008920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:38.008951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:12354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:38.008983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:38.021813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:38.021861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:8070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:38.021878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:38.034844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:38.034876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:7975 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:38.034894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:38.048781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:38.048815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:13460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:38.048832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:38.063158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:38.063191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:9679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.394 [2024-11-20 06:39:38.063208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.394 [2024-11-20 06:39:38.074973] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.394 [2024-11-20 06:39:38.075030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:19362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.075047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.087339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.087385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:20664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.087403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.100565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.100596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:15371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.100628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.113312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.113343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:11257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.113375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.127322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.127355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:11871 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.127372] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.140952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.140983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:17715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.141014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.156437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.156471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.156488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.172810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.172841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.172872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.183780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.183810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:161 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.183842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.197271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.197330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:16503 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.197348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.210922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.210954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:2972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.210979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.395 [2024-11-20 06:39:38.224885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.395 [2024-11-20 06:39:38.224927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.395 [2024-11-20 06:39:38.224947] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.236334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.236366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:12411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.236396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.251727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.251759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:10514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.251775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.263073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.263103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.263134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.277012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.277042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:10323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.277074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.289437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.289471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:7983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.289489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.303141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.303186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.303204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.316459] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.316492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14634 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.316525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.331390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.331427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.331460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.343622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.343669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.343686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.355415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.355446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.355478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.368587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.368620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12628 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.368653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.379846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.379876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:6455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.379908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.393514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.393545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:23080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.393578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.408804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.408835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:25 nsid:1 lba:8926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.408866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.424735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.424766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.424797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.435606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.435653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:14193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.435669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.450886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.450917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:8931 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.450949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.467484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.467517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.467535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.652 [2024-11-20 06:39:38.482762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.652 [2024-11-20 06:39:38.482795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.652 [2024-11-20 06:39:38.482813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.495249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.495279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:9771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.495318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.508906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.508937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:17256 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.508954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.525615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.525669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:14848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.525690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.537357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.537390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:19375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.537408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.550136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.550167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:14077 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.550199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.564605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.564650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.564688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.580633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.580666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:9969 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.580684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.595808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.595852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.595883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.607180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 
00:29:06.914 [2024-11-20 06:39:38.607212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7469 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.607228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.622930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.622962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.622994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.637497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.637543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.637559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.653932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.653979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2271 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.653996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.669457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.669490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:25524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.669521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.680368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.680399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7582 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.680441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.694461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.694491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.694522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.710387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.710418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:17368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.710450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.726463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.726510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:22329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.726527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:06.914 [2024-11-20 06:39:38.741522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:06.914 [2024-11-20 06:39:38.741555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:17195 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:06.914 [2024-11-20 06:39:38.741573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.754639] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.754674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:22990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.754702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.766274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.766325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.766344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.778985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.779016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:13330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.779033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.792087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.792134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:14433 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.792152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.805796] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.805842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.805866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.818511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.818546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:17557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.818575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.830137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.830169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:6217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.830186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.843095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.843127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:21605 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.843144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.854539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.854569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.854601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.870015] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.870047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:21537 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.870081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.886013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.886045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:14740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.886062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.899401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.899433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20364 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.899450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.913295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.913359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.913387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.925138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.925171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20299 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.925202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.939650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.939680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:22362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.939710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 18441.00 IOPS, 72.04 MiB/s [2024-11-20T05:39:39.009Z] [2024-11-20 06:39:38.956087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.956125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:24920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.956155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.968772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.173 [2024-11-20 06:39:38.968804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.173 [2024-11-20 06:39:38.968821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.173 [2024-11-20 06:39:38.980934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.174 [2024-11-20 06:39:38.980972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:24686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.174 [2024-11-20 06:39:38.981003] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.174 [2024-11-20 06:39:38.995808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.174 [2024-11-20 06:39:38.995838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:21684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.174 [2024-11-20 06:39:38.995869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.012497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.012527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:8828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.431 [2024-11-20 06:39:39.012559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.025677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.025708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:14548 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.431 [2024-11-20 06:39:39.025725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.038272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.038309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:7701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.431 [2024-11-20 06:39:39.038343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.049682] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.049712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:23145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.431 [2024-11-20 06:39:39.049743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.062774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.062805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:19164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.431 [2024-11-20 06:39:39.062822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.075164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.075199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:07.431 [2024-11-20 06:39:39.075228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.087988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.088019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.431 [2024-11-20 06:39:39.088035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.431 [2024-11-20 06:39:39.100206] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.431 [2024-11-20 06:39:39.100237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14576 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.100268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.115496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.115528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.115544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.126561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.126592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.126609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.142228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.142258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.142290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.154820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.154858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.154876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.167236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.167270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:20688 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.167311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.178721] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.178757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.178777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.194605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.194653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:8295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.194669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.211000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.211030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:22722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.211061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.224296] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.224335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:26 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.224366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.240789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.240828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:4482 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.240845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.432 [2024-11-20 06:39:39.256221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.432 [2024-11-20 06:39:39.256254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1874 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.432 [2024-11-20 06:39:39.256271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.688 [2024-11-20 06:39:39.268037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.688 [2024-11-20 06:39:39.268072] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:35 nsid:1 lba:877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.688 [2024-11-20 06:39:39.268089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.688 [2024-11-20 06:39:39.281914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.688 [2024-11-20 06:39:39.281961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:5646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.688 [2024-11-20 06:39:39.281978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.688 [2024-11-20 06:39:39.294403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.688 [2024-11-20 06:39:39.294434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.688 [2024-11-20 06:39:39.294467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.688 [2024-11-20 06:39:39.308764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.688 [2024-11-20 06:39:39.308795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:22604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.688 [2024-11-20 06:39:39.308826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.688 [2024-11-20 06:39:39.323907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.688 [2024-11-20 06:39:39.323937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.688 [2024-11-20 06:39:39.323967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.688 [2024-11-20 06:39:39.336728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.688 [2024-11-20 06:39:39.336759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:6272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.336775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.352040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.352086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.352102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.364188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.364233] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.364249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.376713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.376743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:21596 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.376759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.389893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.389922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.389959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.403134] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.403185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:20571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.403210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.415086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.415130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.415157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.427402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.427432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:3102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.427464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.443548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.443579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.443595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.459096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.459125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:25048 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.459157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.475530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.475564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.475581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.487066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.487096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.487127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.502427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.502459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:8410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.502476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.689 [2024-11-20 06:39:39.517063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.689 [2024-11-20 06:39:39.517100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:14675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.689 [2024-11-20 06:39:39.517118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.529243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.529275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:25331 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.529314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.542558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.542592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.542627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.555103] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.555133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:8518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.555163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.569741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.569771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:4569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.569788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.581488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.581518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:10495 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.581550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.595735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.595764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8444 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.595795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.609430] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.609476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.609495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.625260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.625313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:11051 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.625332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.642202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.642233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:17390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.642265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:29:07.946 [2024-11-20 06:39:39.657028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.657058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:19110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.657089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.670355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.670388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:21854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.670406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.681805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.681836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:952 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.681869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.695467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.695516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:15499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.695533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.710313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.710367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.710391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.723912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.723947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.723965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.735999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.736031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.736049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.749583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.749630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.749654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.764375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.764406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7857 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.764439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.946 [2024-11-20 06:39:39.779055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:07.946 [2024-11-20 06:39:39.779098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:191 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.946 [2024-11-20 06:39:39.779121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.790790] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.790821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:10673 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.790852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.802391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.802423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:18158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.802456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.815735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.815766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:20528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.815782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.828262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.828315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:3001 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.828334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.840940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.840971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.841011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.855795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.855830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.855847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.868428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.868470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:3131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.868494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.879243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.879274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.879311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.895199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.895229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21760 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.895261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.910438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.910481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:24134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.910506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.923431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.923465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:21144 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:08.203 [2024-11-20 06:39:39.923482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 [2024-11-20 06:39:39.934731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.934761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.934792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 18600.00 IOPS, 72.66 MiB/s [2024-11-20T05:39:40.039Z] [2024-11-20 06:39:39.950651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ed0720) 00:29:08.203 [2024-11-20 06:39:39.950681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:15437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.203 [2024-11-20 06:39:39.950713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.203 00:29:08.203 Latency(us) 00:29:08.203 [2024-11-20T05:39:40.039Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.203 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:08.203 nvme0n1 : 2.04 18258.46 71.32 0.00 0.00 6869.11 3422.44 50098.63 00:29:08.203 [2024-11-20T05:39:40.039Z] =================================================================================================================== 00:29:08.203 [2024-11-20T05:39:40.039Z] Total : 18258.46 71.32 0.00 0.00 6869.11 3422.44 50098.63 00:29:08.203 { 00:29:08.203 "results": [ 00:29:08.203 { 00:29:08.203 "job": "nvme0n1", 00:29:08.203 "core_mask": "0x2", 00:29:08.203 "workload": "randread", 00:29:08.203 "status": "finished", 00:29:08.203 "queue_depth": 128, 00:29:08.203 "io_size": 4096, 00:29:08.203 "runtime": 2.044422, 00:29:08.203 "iops": 18258.461315716617, 00:29:08.203 "mibps": 71.32211451451803, 00:29:08.203 "io_failed": 0, 00:29:08.203 "io_timeout": 0, 00:29:08.203 "avg_latency_us": 6869.106314235368, 00:29:08.203 "min_latency_us": 3422.4355555555558, 00:29:08.203 "max_latency_us": 50098.63111111111 00:29:08.203 } 00:29:08.203 ], 00:29:08.203 "core_count": 1 00:29:08.203 } 00:29:08.203 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:08.203 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:08.203 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:08.203 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:08.203 | .driver_specific 00:29:08.203 | .nvme_error 00:29:08.203 | .status_code 00:29:08.203 | .command_transient_transport_error' 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 146 > 0 )) 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2198352 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- 
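The trace above (host/digest.sh@71, @27, @28 and @18) reads the injected-error count back out of bdevperf: bdev_get_iostat is queried over /var/tmp/bperf.sock and jq walks down to the command_transient_transport_error counter, which the test then requires to be greater than zero (the (( 146 > 0 )) check). A minimal stand-alone sketch of that lookup, assuming the same paths, socket and JSON layout as this run:

```bash
#!/usr/bin/env bash
# Sketch of the transient-error check from host/digest.sh@27/@28/@71 above.
# SPDK_DIR, the RPC socket and the bdev name are copied from this run.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/bperf.sock
BDEV=nvme0n1

# bdev_get_iostat exposes per-status-code NVMe error counters because the
# controller was created with --nvme-error-stat; the digest test only looks at
# the transient transport errors produced by the corrupted data digests.
errcount=$("$SPDK_DIR"/scripts/rpc.py -s "$RPC_SOCK" bdev_get_iostat -b "$BDEV" \
  | jq -r '.bdevs[0]
           | .driver_specific
           | .nvme_error
           | .status_code
           | .command_transient_transport_error')

# The test passes when at least one such error was counted (146 in this run).
(( errcount > 0 )) && echo "transient transport errors recorded: $errcount"
```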
# '[' -z 2198352 ']' 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2198352 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:08.461 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2198352 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2198352' 00:29:08.735 killing process with pid 2198352 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2198352 00:29:08.735 Received shutdown signal, test time was about 2.000000 seconds 00:29:08.735 00:29:08.735 Latency(us) 00:29:08.735 [2024-11-20T05:39:40.571Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:08.735 [2024-11-20T05:39:40.571Z] =================================================================================================================== 00:29:08.735 [2024-11-20T05:39:40.571Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2198352 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2198885 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2198885 /var/tmp/bperf.sock 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2198885 ']' 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:08.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
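run_bperf_err restarts bdevperf here for the second workload (randread, 128 KiB I/O, queue depth 16) in -z mode, so the application sits idle on /var/tmp/bperf.sock until perform_tests is sent; waitforlisten then blocks until that socket answers. A rough sketch of the same launch-and-wait pattern, with a simple polling loop standing in for the waitforlisten helper rather than reproducing its real implementation:

```bash
#!/usr/bin/env bash
# Sketch of the bdevperf launch performed by run_bperf_err above; the polling
# loop is a simplified stand-in for the waitforlisten helper, not its real code.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC_SOCK=/var/tmp/bperf.sock

# -z keeps bdevperf idle until perform_tests arrives over the RPC socket;
# -w randread -o 131072 -q 16 -t 2 match the parameters of this run.
"$SPDK_DIR"/build/examples/bdevperf -m 2 -r "$RPC_SOCK" \
  -w randread -o 131072 -t 2 -q 16 -z &
bperfpid=$!

# Wait until the application is up and listening on the UNIX domain socket.
until "$SPDK_DIR"/scripts/rpc.py -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$bperfpid" 2>/dev/null || { echo "bdevperf exited early" >&2; exit 1; }
  sleep 0.1
done
echo "bdevperf (pid $bperfpid) is listening on $RPC_SOCK"
```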
00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:08.735 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.006 [2024-11-20 06:39:40.601298] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:09.006 [2024-11-20 06:39:40.601408] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2198885 ] 00:29:09.006 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.006 Zero copy mechanism will not be used. 00:29:09.006 [2024-11-20 06:39:40.669307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:09.006 [2024-11-20 06:39:40.728040] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:09.006 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:09.006 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:09.006 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.006 06:39:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:09.571 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:09.571 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.571 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.571 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.571 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.571 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:09.829 nvme0n1 00:29:09.829 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:09.829 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:09.829 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:09.829 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:09.829 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:09.829 06:39:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
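With bdevperf listening, the digest.sh steps traced above configure the error path end to end: NVMe error statistics are enabled with retries disabled, any stale crc32c injection is cleared, the controller is attached with data digests (--ddgst) over TCP, crc32c corruption is injected, and the queued job is started with perform_tests. The condensed sketch below mirrors those RPCs one for one; the two helper functions, and the assumption that rpc_cmd addresses the nvmf target application over its default socket while bperf_rpc addresses the bdevperf initiator, are additions for a self-contained example:

```bash
#!/usr/bin/env bash
# Condensed sketch of the digest-error setup traced above (host/digest.sh@61-@69).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
bperf_rpc()  { "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/bperf.sock "$@"; }  # bdevperf initiator
target_rpc() { "$SPDK_DIR"/scripts/rpc.py "$@"; }  # nvmf target, default socket (assumed)

# Count NVMe error completions per status code and never retry failed I/O,
# so every corrupted data digest surfaces as a transient transport error.
bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# Clear any stale crc32c error injection before connecting.
target_rpc accel_error_inject_error -o crc32c -t disable

# Attach the subsystem with data digest enabled (--ddgst) over TCP.
bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Corrupt the computed crc32c results (-i 32 as used in this run), then start
# the queued randread job in the idle bdevperf instance.
target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
"$SPDK_DIR"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
```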
00:29:09.829 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:09.829 Zero copy mechanism will not be used. 00:29:09.829 Running I/O for 2 seconds... 00:29:09.829 [2024-11-20 06:39:41.621148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.621210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.621257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.625853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.625888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.625906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.631097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.631129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.631147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.636274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.636314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.636335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.641070] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.641100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.641117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.645924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.645954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.645972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.650783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.650813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4576 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.650830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.655827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.655858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.655876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:09.829 [2024-11-20 06:39:41.661337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:09.829 [2024-11-20 06:39:41.661373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.829 [2024-11-20 06:39:41.661391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.088 [2024-11-20 06:39:41.667621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.088 [2024-11-20 06:39:41.667660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.088 [2024-11-20 06:39:41.667678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.088 [2024-11-20 06:39:41.675408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.088 [2024-11-20 06:39:41.675440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.088 [2024-11-20 06:39:41.675458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.681543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.681574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.681592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.687475] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.687507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.687524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.693224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.693256] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.693274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.698808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.698839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.698856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.704546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.704592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.704610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.711276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.711316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.711336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.716917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.716948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.716965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.722260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.722290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.722317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.725931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.725962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.725980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.730464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.730496] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.730513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.734922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.734952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.734984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.740085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.740115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.740132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.746359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.746391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.746409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.752440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.752471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.752488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.758001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.758031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.758048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.763277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.763313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.763354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.768530] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.768561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.768593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.773401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.773432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.773449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.777893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.777923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.777940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.782455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.782485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.782502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.787124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.787171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.791679] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.791723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.791739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.796284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.796337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.796369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.800828] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.089 [2024-11-20 06:39:41.800858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.089 [2024-11-20 06:39:41.800874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.089 [2024-11-20 06:39:41.805492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.805523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.805540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.810083] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.810114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.810130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.814738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.814769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.814785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.819409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.819439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.819455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.823829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.823859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.823893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.828293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.828330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.828347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:10.090 [2024-11-20 06:39:41.833005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.833050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.833067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.837632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.837663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.837680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.842176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.842206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.842229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.846717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.846746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.846762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.851366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.851395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.851411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.855996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.856026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.856042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.860457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.860487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.860503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.865102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.865131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.865148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.870109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.870139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.870156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.875114] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.875160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.875177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.879887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.879918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.879934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.884378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.884413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.884445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.888950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.888979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.888995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.893440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.893469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.893486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.897921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.897950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.897965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.902551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.902581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.902597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.907142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.907171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.907203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.911845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.911874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.911905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.090 [2024-11-20 06:39:41.916494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.090 [2024-11-20 06:39:41.916523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.090 [2024-11-20 06:39:41.916540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.921242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.921272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.921290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.925782] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.925826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.350 [2024-11-20 06:39:41.925842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.930413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.930444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.930461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.935011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.935055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.935071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.939540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.939571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.939588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.944118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.944148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.944164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.948661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.948690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.948721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.953959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.953990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.954008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.958355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.958386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.958403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.961294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.961343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.961367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.965847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.965876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.965907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.970365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.970394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.970412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.974880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.974908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.974939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.979521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.979550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.979584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.984082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.984111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.984127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.988656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.988685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.988701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.993343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.993374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.993391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:41.997919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:41.997947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:41.997977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.002441] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.002471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.002488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.006815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.006859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.006874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.011969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.011996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.012011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.015944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.015973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.015989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.020558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 
00:29:10.350 [2024-11-20 06:39:42.020588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.020605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.025342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.025372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.025404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.030043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.030071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.030089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.350 [2024-11-20 06:39:42.035456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.350 [2024-11-20 06:39:42.035484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.350 [2024-11-20 06:39:42.035500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.039577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.039608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.039629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.044157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.044186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.044202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.048755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.048784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.048801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.054361] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.054407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.054423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.059321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.059367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.059385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.065278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.065331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.065350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.072866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.072910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.072927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.079025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.079056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.079073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.085905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.085934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.085951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.091214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.091248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.091281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.096974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.097003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.097034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.102676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.102706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.102739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.109080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.109110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.109126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.114920] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.114948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.114964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.120145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.120174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.120206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.126082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.126112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.126144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.132131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.132163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.132180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.138254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.138299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.138324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.144853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.144884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.144901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.149853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.149884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.149902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.154456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.154487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.154504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.159055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.159099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.159115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.163906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.163949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.163966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.169403] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.169433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.169449] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.351 [2024-11-20 06:39:42.177004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.351 [2024-11-20 06:39:42.177039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.351 [2024-11-20 06:39:42.177056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.184123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.184156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.184188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.192040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.192086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.192108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.199714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.199745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.199762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.207412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.207444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.207462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.215165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.215197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.215228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.222587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.222620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.611 [2024-11-20 06:39:42.222638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.230225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.230263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.230312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.238273] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.238313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.238333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.243645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.611 [2024-11-20 06:39:42.243677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.611 [2024-11-20 06:39:42.243695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.611 [2024-11-20 06:39:42.249208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.249239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.249257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.255343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.255375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.255392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.262886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.262916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.262948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.270849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.270880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17056 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.270897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.278469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.278501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.278518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.286621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.286652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.286669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.294899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.294928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.294944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.300965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.300997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.301014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.306943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.306974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.306992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.312846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.312877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.312901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.317594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.317625] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.317642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.322714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.322746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.322763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.327934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.327965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.327997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.333238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.333270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.333287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.336036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.336066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.336083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.340683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.340714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.340731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.345742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.345772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.345805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.350429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 
[2024-11-20 06:39:42.350460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.350478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.355245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.355282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.355300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.359021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.359051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.359069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.361812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.361843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.361860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.366289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.366330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.366349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.370747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.370777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.370794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.373594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.373625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.373641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.376903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.376933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.376950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.381290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.381333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.612 [2024-11-20 06:39:42.381352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.612 [2024-11-20 06:39:42.388276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.612 [2024-11-20 06:39:42.388328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.388346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.394812] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.394843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.394860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.400418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.400448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.400465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.405905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.405936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.405953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.410376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.410406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.410423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.415058] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.415089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.415106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.420500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.420531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.420549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.423310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.423340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.423356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.428711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.428754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.428769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.436309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.436355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.436380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.613 [2024-11-20 06:39:42.442689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.613 [2024-11-20 06:39:42.442720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.613 [2024-11-20 06:39:42.442753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.872 [2024-11-20 06:39:42.450656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.872 [2024-11-20 06:39:42.450701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.872 [2024-11-20 06:39:42.450717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 
m:0 dnr:0 00:29:10.872 [2024-11-20 06:39:42.457780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.872 [2024-11-20 06:39:42.457825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.872 [2024-11-20 06:39:42.457841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.872 [2024-11-20 06:39:42.463724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.872 [2024-11-20 06:39:42.463753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.872 [2024-11-20 06:39:42.463770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.872 [2024-11-20 06:39:42.469472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.872 [2024-11-20 06:39:42.469502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.872 [2024-11-20 06:39:42.469519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.872 [2024-11-20 06:39:42.475050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.872 [2024-11-20 06:39:42.475081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.872 [2024-11-20 06:39:42.475113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.872 [2024-11-20 06:39:42.480185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.480215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.480231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.484650] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.484680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.484697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.489123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.489159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.489191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.493720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.493763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.493779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.498299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.498336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.498368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.503056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.503085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.503101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.508034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.508064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.508081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.513340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.513370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.513402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.518595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.518625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.518644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.523740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.523784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.523800] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.528877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.528907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.528925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.534875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.534922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.534939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.540586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.540633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.540650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.545888] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.545919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.545935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.551405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.551436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.551453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.556636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.556683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.556700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.561727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.561756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 
[2024-11-20 06:39:42.561787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.566874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.566919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.566935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.572222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.572252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.572269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.577548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.577579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.577618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.582351] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.582382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.582399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.587391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.587422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.587440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.594087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.594117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.594134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.600146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.600178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8992 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.600196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.605642] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.605675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.873 [2024-11-20 06:39:42.605692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.873 [2024-11-20 06:39:42.611155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.873 [2024-11-20 06:39:42.611202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.611219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.615579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.615612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.615629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.874 5858.00 IOPS, 732.25 MiB/s [2024-11-20T05:39:42.710Z] [2024-11-20 06:39:42.622536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.622568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.622585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.627705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.627737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.627754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.633251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.633284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.633313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.638489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.638521] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.638538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.644610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.644642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.644659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.652222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.652254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.652272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.657762] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.657794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.657811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.663558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.663601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.663618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.668886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.668917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.668950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.673418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.673449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.673472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.677935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.677965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.677981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.682417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.682448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.682465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.687105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.687136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.687152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.692139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.692170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.692187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.697149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.697180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.697198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:10.874 [2024-11-20 06:39:42.703016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:10.874 [2024-11-20 06:39:42.703047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.874 [2024-11-20 06:39:42.703064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.134 [2024-11-20 06:39:42.710704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.134 [2024-11-20 06:39:42.710746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.134 [2024-11-20 06:39:42.710763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.134 [2024-11-20 06:39:42.716806] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.134 [2024-11-20 06:39:42.716837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.134 [2024-11-20 06:39:42.716855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.134 [2024-11-20 06:39:42.722508] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.134 [2024-11-20 06:39:42.722547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.134 [2024-11-20 06:39:42.722565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.134 [2024-11-20 06:39:42.727586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.727617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.727635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.732024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.732055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.732072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.736456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.736485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.736502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.740969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.740999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.741016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.745902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.745934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.745952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:11.135 [2024-11-20 06:39:42.750867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.750898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.750915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.755381] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.755410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.755427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.759908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.759937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.759953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.764314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.764344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.764361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.768742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.768772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.768790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.773378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.773409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.773426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.778522] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.778553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.778570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.783300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.783348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.783366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.788657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.788688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.788705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.793932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.793963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.793997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.799488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.799519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.799536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.805617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.805648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.805688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.811199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.811230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.811262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.816595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.816626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.816643] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.821863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.821894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.821927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.826487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.826517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.826534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.831095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.831126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.831144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.835536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.835566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.835582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.840026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.840056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.840073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.844463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.844493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 06:39:42.844509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.848933] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.135 [2024-11-20 06:39:42.848963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.135 [2024-11-20 
06:39:42.848980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.135 [2024-11-20 06:39:42.853409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.853439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.853455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.858040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.858084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.858101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.862772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.862802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.862835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.867477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.867507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.867538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.872278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.872315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.872334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.876815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.876845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.876862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.881232] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.881261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.881277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.885659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.885689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.885712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.890266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.890296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.890321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.894875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.894905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.894922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.899342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.899371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.899388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.903542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.903572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.903588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.906362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.906391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.906407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.910662] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.910693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:12 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.910710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.915875] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.915905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.915921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.921977] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.922008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.922024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.928431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.928485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.928503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.934429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.934460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.934477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.939536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.939567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.939585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.944455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.944486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.944503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.949104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.949133] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.136 [2024-11-20 06:39:42.949148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.136 [2024-11-20 06:39:42.953736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.136 [2024-11-20 06:39:42.953780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.137 [2024-11-20 06:39:42.953797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.137 [2024-11-20 06:39:42.958445] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.137 [2024-11-20 06:39:42.958475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.137 [2024-11-20 06:39:42.958491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.137 [2024-11-20 06:39:42.963252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.137 [2024-11-20 06:39:42.963298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.137 [2024-11-20 06:39:42.963324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:42.967663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:42.967694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:42.967711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:42.972405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:42.972439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:42.972456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:42.977278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:42.977334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:42.977355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:42.982885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 
[2024-11-20 06:39:42.982916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:42.982932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:42.990618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:42.990649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:42.990681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:42.996813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:42.996858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:42.996875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.003025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.003057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.003089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.009133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.009178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.009195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.015063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.015095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.015113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.021204] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.021250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.021288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.027703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.027735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.027766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.032619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.032650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.032683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.037851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.037882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.037900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.043113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.043144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.043161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.048082] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.048112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.048129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.053774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.053805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.053822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.058440] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.058470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.058487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.063092] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.063123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.063155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.068073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.068110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.068128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.074064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.074095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.074112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.081743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.397 [2024-11-20 06:39:43.081773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.397 [2024-11-20 06:39:43.081790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.397 [2024-11-20 06:39:43.087729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.087760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.087776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.093477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.093508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.093526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.097067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.097097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.097115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:11.398 [2024-11-20 06:39:43.102965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.102995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.103027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.108938] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.108967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.108985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.114325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.114373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.114389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.120478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.120510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.120527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.125541] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.125572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.125589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.130835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.130883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.130909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.136324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.136355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.136372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.140510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.140541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.140559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.145529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.145560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.145578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.151058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.151090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.151107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.155976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.156007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.156025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.161536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.161566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.161590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.166053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.166083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.166099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.170897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.170928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.170945] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.176145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.176175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.176192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.182986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.183016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.183033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.190415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.190447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.190464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.196587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.196618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.196636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.202861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.202892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.202909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.209549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.209581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.209598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.215706] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.215737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.215755] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.221184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.221214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.221232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.398 [2024-11-20 06:39:43.226389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.398 [2024-11-20 06:39:43.226419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.398 [2024-11-20 06:39:43.226437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.230945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.230983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.231003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.235471] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.235501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.235517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.239916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.239946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.239962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.244489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.244519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.244535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.248976] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.249005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.249022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.253448] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.253478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.253501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.258111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.258140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.258157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.262668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.262698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.262715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.267152] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.267181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.267198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.271559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.271588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.271606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.276188] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.276218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.276234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.280857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.280887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.280904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.285428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.285458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.285475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.289935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.289965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.289981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.294287] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.294329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.294347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.298747] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.298791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.298809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.303243] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.303272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.303288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.308039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.308069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.308086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.313051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.313081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.313098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.317896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.317926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.317943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.322290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.322326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.322345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.326849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.659 [2024-11-20 06:39:43.326877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.659 [2024-11-20 06:39:43.326894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.659 [2024-11-20 06:39:43.331457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.331487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.331504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.336092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.336122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.336138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.340637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.340668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.340684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.345338] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 
[2024-11-20 06:39:43.345369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.345385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.349838] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.349868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.349885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.354504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.354534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.354552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.359229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.359258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.359275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.364570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.364600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.364617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.369907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.369938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.369955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.374658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.374689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.374712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.379360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest 
error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.379389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.379407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.383856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.383886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.383903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.389168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.389199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.389216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.396055] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.396086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.396105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.403147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.403179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.403196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.409004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.409045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.409063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.412986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.413016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.413032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.417750] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.417782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.417800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.423708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.423759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.423777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.429487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.429534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.429551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.434995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.435026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.435043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.439752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.439782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.439798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.444218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.444262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.444277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.448847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.448876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.448892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.453636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.453665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.453682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.458283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.458322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.458341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.462793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.660 [2024-11-20 06:39:43.462820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.660 [2024-11-20 06:39:43.462836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.660 [2024-11-20 06:39:43.467434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.661 [2024-11-20 06:39:43.467462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.661 [2024-11-20 06:39:43.467478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.661 [2024-11-20 06:39:43.472101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.661 [2024-11-20 06:39:43.472131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.661 [2024-11-20 06:39:43.472162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.661 [2024-11-20 06:39:43.476999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.661 [2024-11-20 06:39:43.477030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.661 [2024-11-20 06:39:43.477063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.661 [2024-11-20 06:39:43.482001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.661 [2024-11-20 06:39:43.482032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.661 [2024-11-20 06:39:43.482065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.661 [2024-11-20 06:39:43.486726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.661 [2024-11-20 06:39:43.486756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.661 [2024-11-20 06:39:43.486773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.661 [2024-11-20 06:39:43.491180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.661 [2024-11-20 06:39:43.491210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.661 [2024-11-20 06:39:43.491227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.495677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.920 [2024-11-20 06:39:43.495721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.920 [2024-11-20 06:39:43.495738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.500278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.920 [2024-11-20 06:39:43.500330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.920 [2024-11-20 06:39:43.500349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.504830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.920 [2024-11-20 06:39:43.504864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.920 [2024-11-20 06:39:43.504881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.509562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.920 [2024-11-20 06:39:43.509593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.920 [2024-11-20 06:39:43.509625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.514181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.920 [2024-11-20 06:39:43.514224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.920 [2024-11-20 06:39:43.514240] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.518845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.920 [2024-11-20 06:39:43.518889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.920 [2024-11-20 06:39:43.518906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.920 [2024-11-20 06:39:43.523426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.523457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.523473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.528020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.528050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.528067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.532622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.532652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.532669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.537429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.537458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.537475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.542184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.542228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.542244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.547009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.547039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:11.921 [2024-11-20 06:39:43.547072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.551854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.551884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.551915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.557493] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.557524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.557541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.563794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.563823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.563839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.571247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.571294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.571319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.577626] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.577673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.577691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.583518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.583549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:11.921 [2024-11-20 06:39:43.583567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:11.921 [2024-11-20 06:39:43.589527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0) 00:29:11.921 [2024-11-20 06:39:43.589558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.921 [2024-11-20 06:39:43.589575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.921 [2024-11-20 06:39:43.595473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0)
00:29:11.921 [2024-11-20 06:39:43.595505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.921 [2024-11-20 06:39:43.595534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.921 [2024-11-20 06:39:43.601409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0)
00:29:11.921 [2024-11-20 06:39:43.601440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.921 [2024-11-20 06:39:43.601458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:11.921 [2024-11-20 06:39:43.608044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0)
00:29:11.921 [2024-11-20 06:39:43.608076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.921 [2024-11-20 06:39:43.608093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:29:11.921 [2024-11-20 06:39:43.615455] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0)
00:29:11.921 [2024-11-20 06:39:43.615486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.921 [2024-11-20 06:39:43.615503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:29:11.921 [2024-11-20 06:39:43.624556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x183ddc0)
00:29:11.921 [2024-11-20 06:39:43.624601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:11.921 [2024-11-20 06:39:43.624617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:11.921 5936.50 IOPS, 742.06 MiB/s
00:29:11.921 Latency(us)
00:29:11.921 [2024-11-20T05:39:43.757Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:11.921 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:29:11.921 nvme0n1 : 2.01 5935.35 741.92 0.00 0.00 2691.12 709.97 9126.49
00:29:11.921 [2024-11-20T05:39:43.757Z] ===================================================================================================================
00:29:11.921 [2024-11-20T05:39:43.757Z] Total : 5935.35 741.92 0.00 0.00 2691.12 709.97 9126.49
00:29:11.921 {
00:29:11.921 "results": [
00:29:11.921 {
00:29:11.921 "job": "nvme0n1",
00:29:11.921 "core_mask": "0x2",
00:29:11.921 "workload": "randread",
00:29:11.921 "status": "finished",
00:29:11.921 "queue_depth": 16,
00:29:11.921 "io_size": 131072,
00:29:11.921 "runtime": 2.005275,
00:29:11.921 "iops": 5935.345526174714,
00:29:11.921 "mibps": 741.9181907718392,
00:29:11.921 "io_failed": 0,
00:29:11.921 "io_timeout": 0,
00:29:11.921 "avg_latency_us": 2691.1199507085644,
00:29:11.921 "min_latency_us": 709.9733333333334,
00:29:11.921 "max_latency_us": 9126.494814814814
00:29:11.921 }
00:29:11.921 ],
00:29:11.921 "core_count": 1
00:29:11.921 }
00:29:11.921 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:11.921 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:11.921 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:11.921 | .driver_specific
00:29:11.921 | .nvme_error
00:29:11.921 | .status_code
00:29:11.921 | .command_transient_transport_error'
00:29:11.921 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 384 > 0 ))
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2198885
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2198885 ']'
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2198885
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']'
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2198885
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']'
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2198885'
00:29:12.180 killing process with pid 2198885
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2198885
00:29:12.180 Received shutdown signal, test time was about 2.000000 seconds
00:29:12.180
00:29:12.180 Latency(us)
00:29:12.180 [2024-11-20T05:39:44.016Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:12.180 [2024-11-20T05:39:44.016Z] ===================================================================================================================
00:29:12.180 [2024-11-20T05:39:44.016Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:12.180 06:39:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2198885
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2199302
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2199302 /var/tmp/bperf.sock
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2199302 ']'
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:29:12.438 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:12.438 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:12.438 [2024-11-20 06:39:44.256117] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization...
00:29:12.438 [2024-11-20 06:39:44.256210] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199302 ]
00:29:12.696 [2024-11-20 06:39:44.332358] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:12.696 [2024-11-20 06:39:44.395716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:12.696 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 ))
00:29:12.696 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0
00:29:12.696 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:12.696 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:13.262 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:13.262 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:13.262 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:13.262 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:13.262 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:13.262 06:39:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:13.829 nvme0n1
00:29:13.829 06:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:29:13.829 06:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:13.829 06:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:13.829 06:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:29:13.829 06:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:13.829 06:39:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:13.829 Running I/O for 2 seconds...
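The trace above is the setup half of run_bperf_err in host/digest.sh for the randwrite case: bdevperf is started against the /var/tmp/bperf.sock RPC socket, NVMe error counters and unlimited bdev retries are switched on, the controller is attached over TCP with data digest (--ddgst) enabled, and the accel layer is told to corrupt crc32c operations so received data digests stop matching. A minimal standalone sketch of that flow follows, assuming the paths, target address, and NQN captured in this run; digest.sh's bperf_rpc/bperf_py wrappers and its waitforlisten step are replaced with direct calls, so treat it as an approximation of the script rather than the script itself:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # SPDK tree used by this job
    SOCK=/var/tmp/bperf.sock                                 # RPC socket bdevperf serves (-r)

    # Start bdevperf on core 1 (-m 2) with the randwrite/4096-byte/qd128 workload seen above;
    # -z makes it wait for RPC configuration instead of running immediately.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$SOCK" -w randwrite -o 4096 -t 2 -q 128 -z &
    # (digest.sh waits for the socket to appear via waitforlisten before issuing RPCs.)

    # Keep per-status-code NVMe error counters and retry failed I/O indefinitely at the bdev layer.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # Attach the NVMe/TCP controller with data digest enabled, producing bdev nvme0n1.
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Make crc32c accel operations produce corrupted results (arguments exactly as in the trace),
    # which is what shows up below as "data digest error" / COMMAND TRANSIENT TRANSPORT ERROR.
    "$SPDK/scripts/rpc.py" -s "$SOCK" accel_error_inject_error -o crc32c -t corrupt -i 256

    # Run the 2-second workload, then read back how many completions ended with the transient
    # transport error status, the count the test checks to be greater than zero.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$SOCK" perform_tests
    "$SPDK/scripts/rpc.py" -s "$SOCK" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'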
00:29:13.829 [2024-11-20 06:39:45.511333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016edf550 00:29:13.829 [2024-11-20 06:39:45.512631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:5668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.512670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.522974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ee95a0 00:29:13.829 [2024-11-20 06:39:45.524167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.524209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.535392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016efa7d8 00:29:13.829 [2024-11-20 06:39:45.536583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25537 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.536613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.549414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016edf988 00:29:13.829 [2024-11-20 06:39:45.551184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:22903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.551235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.557982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016efd208 00:29:13.829 [2024-11-20 06:39:45.558813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:11764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.558843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.570505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ee4578 00:29:13.829 [2024-11-20 06:39:45.571564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:22775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.571593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.582971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016eebb98 00:29:13.829 [2024-11-20 06:39:45.584214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:17745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.584257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 
sqhd:001d p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.595168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ee0ea0 00:29:13.829 [2024-11-20 06:39:45.596400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:23910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.596444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.609125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016efb480 00:29:13.829 [2024-11-20 06:39:45.610927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:18929 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.610969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.617589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef20d8 00:29:13.829 [2024-11-20 06:39:45.618378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.618405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.629430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef1ca0 00:29:13.829 [2024-11-20 06:39:45.630378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:14814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.630406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.643841] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:13.829 [2024-11-20 06:39:45.644076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.644118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:13.829 [2024-11-20 06:39:45.657378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:13.829 [2024-11-20 06:39:45.657614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:13.829 [2024-11-20 06:39:45.657641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.671069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.671282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13252 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.671330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.684753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.684962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.685004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.698208] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.698481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.698508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.711815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.712025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.712051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.725381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.725617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.725657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.738896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.739108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:6183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.739149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.752340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.752570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.752611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.765689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.765934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19474 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.765971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.779203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.779470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19639 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.779497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.792884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.793092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.793133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.806455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.806684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.806724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.820059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.820286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11473 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.820333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.833756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.833967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.833994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.847199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.847475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15096 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.847503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.860665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.860887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5947 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.088 [2024-11-20 06:39:45.860912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.088 [2024-11-20 06:39:45.874045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.088 [2024-11-20 06:39:45.874250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.089 [2024-11-20 06:39:45.874292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.089 [2024-11-20 06:39:45.887505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.089 [2024-11-20 06:39:45.887748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17099 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.089 [2024-11-20 06:39:45.887777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.089 [2024-11-20 06:39:45.900943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.089 [2024-11-20 06:39:45.901151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.089 [2024-11-20 06:39:45.901177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.089 [2024-11-20 06:39:45.914372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.089 [2024-11-20 06:39:45.914594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.089 [2024-11-20 06:39:45.914620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:45.927835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:45.928067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:45.928107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:45.941263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:45.941524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:45.941551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:45.954845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:45.955061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 
06:39:45.955102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:45.968334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:45.968570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:45.968609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:45.981754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:45.981977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:45.982018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:45.995504] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:45.995734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:45.995773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:46.008948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:46.009227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:46.009256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:46.022910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:46.023169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:46.023197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:46.036554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:46.036814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:46.036840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:46.050180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:46.050433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11712 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:14.347 [2024-11-20 06:39:46.050460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:46.063694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:46.063931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.347 [2024-11-20 06:39:46.063971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.347 [2024-11-20 06:39:46.077397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.347 [2024-11-20 06:39:46.077630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.077670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.091077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.091320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8373 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.091363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.104797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.105027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.105053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.118316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.118553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.118579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.131793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.132016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.132041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.145273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.145509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24178 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.145549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.158814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.159032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.159058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.348 [2024-11-20 06:39:46.172361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.348 [2024-11-20 06:39:46.172587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.348 [2024-11-20 06:39:46.172629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.185862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.186081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.186106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.199379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.199612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9033 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.199642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.213026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.213240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.213280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.226544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.226798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.226839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.240053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.240264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:110 nsid:1 lba:12069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.240295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.253566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.253795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.253835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.267068] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.267285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.267321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.280675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.280940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.280968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.294231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.294501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17210 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.294529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.307892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.308126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.308151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.321311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.321533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.321574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.334928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.335156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.335181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.348581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.348834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.348860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.606 [2024-11-20 06:39:46.361968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.606 [2024-11-20 06:39:46.362218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.606 [2024-11-20 06:39:46.362245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.607 [2024-11-20 06:39:46.375558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.607 [2024-11-20 06:39:46.375790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18886 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.607 [2024-11-20 06:39:46.375829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.607 [2024-11-20 06:39:46.389012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.607 [2024-11-20 06:39:46.389240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:5031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.607 [2024-11-20 06:39:46.389265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.607 [2024-11-20 06:39:46.402491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.607 [2024-11-20 06:39:46.402739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14770 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.607 [2024-11-20 06:39:46.402765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.607 [2024-11-20 06:39:46.416079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.607 [2024-11-20 06:39:46.416311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.607 [2024-11-20 06:39:46.416339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.607 [2024-11-20 06:39:46.429447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.607 [2024-11-20 
06:39:46.429700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.607 [2024-11-20 06:39:46.429742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.443060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.443285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14743 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.443340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.456597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.456823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.456863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.470052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.470281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.470312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.483735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.483944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.483983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 19048.00 IOPS, 74.41 MiB/s [2024-11-20T05:39:46.701Z] [2024-11-20 06:39:46.497044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.497558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12601 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.497600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.510458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.510710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:9197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.510735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.524010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.524218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.524242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.537362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.537588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.537630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.550857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.865 [2024-11-20 06:39:46.551080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.865 [2024-11-20 06:39:46.551104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.865 [2024-11-20 06:39:46.564343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.564576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.564602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.577855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.578067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24627 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.578093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.591351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.591594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.591624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.604787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.604998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1496 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.605023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.618191] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.618448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.618474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.631740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.631943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.631984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.645389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.645616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.645655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.658768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.658975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24042 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.659001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.672172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.672422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.672448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:14.866 [2024-11-20 06:39:46.685809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:14.866 [2024-11-20 06:39:46.686014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:14.866 [2024-11-20 06:39:46.686055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 [2024-11-20 06:39:46.699233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.699469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.124 [2024-11-20 06:39:46.699497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 
[2024-11-20 06:39:46.712696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.712905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:21983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.124 [2024-11-20 06:39:46.712946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 [2024-11-20 06:39:46.726200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.726444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.124 [2024-11-20 06:39:46.726485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 [2024-11-20 06:39:46.739704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.739922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.124 [2024-11-20 06:39:46.739948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 [2024-11-20 06:39:46.753065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.753274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16437 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.124 [2024-11-20 06:39:46.753322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 [2024-11-20 06:39:46.766507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.766740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.124 [2024-11-20 06:39:46.766780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.124 [2024-11-20 06:39:46.780048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.124 [2024-11-20 06:39:46.780279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4863 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.780312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.793596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.793875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.793902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 
p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.807239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.807499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.807526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.820766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.820976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.821001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.834254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.834514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.834541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.847763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.847974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.848015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.861238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.861494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.861521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.874940] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.875164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.875189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.888531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.888775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.888801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.902124] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.902379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.902405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.915723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.915936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11685 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.915962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.929146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.929397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:12439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.929424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.942528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.942774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.942804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.125 [2024-11-20 06:39:46.956156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.125 [2024-11-20 06:39:46.956387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11335 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.125 [2024-11-20 06:39:46.956415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:46.969789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:46.969999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:46.970025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:46.983388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:46.983638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:46.983664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:46.996938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:46.997150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:46.997189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.010412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.010644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.010670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.023904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.024117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19571 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.024158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.037379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.037625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:15104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.037651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.050988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.051255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25002 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.051281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.064602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.064855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:10792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.064895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.078183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.078441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20669 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.078467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.091691] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.091938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.091963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.105077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.105313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:3245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.105343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.118551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.118820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.118847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.132050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.132273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14166 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.132321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.145519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.145741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.145770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.158985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.159218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.159244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.172466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.172745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 
06:39:47.172777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.186152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.186400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.186428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.199799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.200028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:17347 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.200056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.384 [2024-11-20 06:39:47.213432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.384 [2024-11-20 06:39:47.213660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.384 [2024-11-20 06:39:47.213688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.227181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.227430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.643 [2024-11-20 06:39:47.227460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.240902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.241126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.643 [2024-11-20 06:39:47.241154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.254797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.255028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:4025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.643 [2024-11-20 06:39:47.255054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.268616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.268829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:2680 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:15.643 [2024-11-20 06:39:47.268872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.282661] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.282890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22077 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.643 [2024-11-20 06:39:47.282916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.296647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.296874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.643 [2024-11-20 06:39:47.296908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.310360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.643 [2024-11-20 06:39:47.310587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:23943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.643 [2024-11-20 06:39:47.310615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.643 [2024-11-20 06:39:47.323775] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.323987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7921 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.324028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.337603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.337821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.337861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.351574] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.351808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:25156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.351848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.365301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.365553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:7104 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.365580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.379144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.379390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.379418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.393012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.393268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:18345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.393294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.406908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.407124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16831 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.407165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.420741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.420978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:22570 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.421005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.434493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.434743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24325 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.434770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.448329] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.448570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:14470 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.448597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.462312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.462533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 
nsid:1 lba:9287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.462561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.644 [2024-11-20 06:39:47.475991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.644 [2024-11-20 06:39:47.476210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.644 [2024-11-20 06:39:47.476252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.902 [2024-11-20 06:39:47.489840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52dd50) with pdu=0x200016ef2d80 00:29:15.902 [2024-11-20 06:39:47.490071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:15.902 [2024-11-20 06:39:47.490097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:15.902 18919.00 IOPS, 73.90 MiB/s 00:29:15.902 Latency(us) 00:29:15.902 [2024-11-20T05:39:47.738Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:15.902 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:15.902 nvme0n1 : 2.01 18922.99 73.92 0.00 0.00 6749.94 2730.67 15728.64 00:29:15.902 [2024-11-20T05:39:47.738Z] =================================================================================================================== 00:29:15.902 [2024-11-20T05:39:47.738Z] Total : 18922.99 73.92 0.00 0.00 6749.94 2730.67 15728.64 00:29:15.902 { 00:29:15.902 "results": [ 00:29:15.902 { 00:29:15.902 "job": "nvme0n1", 00:29:15.902 "core_mask": "0x2", 00:29:15.902 "workload": "randwrite", 00:29:15.902 "status": "finished", 00:29:15.902 "queue_depth": 128, 00:29:15.902 "io_size": 4096, 00:29:15.902 "runtime": 2.006343, 00:29:15.902 "iops": 18922.98575069168, 00:29:15.902 "mibps": 73.91791308863938, 00:29:15.902 "io_failed": 0, 00:29:15.902 "io_timeout": 0, 00:29:15.902 "avg_latency_us": 6749.942616415078, 00:29:15.902 "min_latency_us": 2730.6666666666665, 00:29:15.902 "max_latency_us": 15728.64 00:29:15.902 } 00:29:15.902 ], 00:29:15.902 "core_count": 1 00:29:15.902 } 00:29:15.902 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:15.902 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:15.902 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:15.902 | .driver_specific 00:29:15.902 | .nvme_error 00:29:15.902 | .status_code 00:29:15.902 | .command_transient_transport_error' 00:29:15.902 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 148 > 0 )) 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2199302 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@952 -- # '[' -z 2199302 ']' 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2199302 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2199302 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2199302' 00:29:16.160 killing process with pid 2199302 00:29:16.160 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2199302 00:29:16.160 Received shutdown signal, test time was about 2.000000 seconds 00:29:16.160 00:29:16.160 Latency(us) 00:29:16.160 [2024-11-20T05:39:47.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:16.160 [2024-11-20T05:39:47.996Z] =================================================================================================================== 00:29:16.161 [2024-11-20T05:39:47.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:16.161 06:39:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2199302 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2199716 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2199716 /var/tmp/bperf.sock 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # '[' -z 2199716 ']' 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:16.419 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
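Each repeated triplet in the flood of messages above is one injected failure observed end to end: the TCP transport's data_crc32_calc_done flags a bad data digest on the PDU, the in-flight WRITE is printed, and the command then completes with COMMAND TRANSIENT TRANSPORT ERROR (status 00/22 as printed, dnr:0, so retry is still permitted). The trace between the bdevperf statistics and the teardown is the pass/fail decision for that first randwrite run: digest.sh reads the controller's error counters over the bperf RPC socket and requires the transient-transport-error count to be non-zero (here it saw 148) before killing bdevperf and moving on to the next workload. A minimal stand-alone sketch of that counter check, assuming an SPDK checkout under $SPDK_DIR and a bdevperf instance already listening on /var/tmp/bperf.sock (both paths taken from the trace), could look like this:

#!/usr/bin/env bash
# Sketch of the transient-error check performed by host/digest.sh above (not the script itself).
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}  # assumed checkout location
BPERF_SOCK=/var/tmp/bperf.sock                                           # bdevperf RPC socket from the trace

get_transient_errcount() {
    # The nvme_error counters are populated because bdev_nvme_set_options is
    # called with --nvme-error-stat (visible in the setup of the next run below).
    local bdev=$1
    "$SPDK_DIR/scripts/rpc.py" -s "$BPERF_SOCK" bdev_get_iostat -b "$bdev" |
        jq -r '.bdevs[0]
               | .driver_specific
               | .nvme_error
               | .status_code
               | .command_transient_transport_error'
}

errcount=$(get_transient_errcount nvme0n1)
# Any non-zero count means the corrupted data digests were detected and
# surfaced as transient transport errors; the run above reported 148.
(( errcount > 0 )) && echo "digest errors observed: $errcount"

In the trace the same work is split between the bperf_rpc and jq helpers (digest.sh lines 27-28); the sketch simply inlines them.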
00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:16.419 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.419 [2024-11-20 06:39:48.113319] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:16.419 [2024-11-20 06:39:48.113406] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2199716 ] 00:29:16.419 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:16.419 Zero copy mechanism will not be used. 00:29:16.419 [2024-11-20 06:39:48.179593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:16.419 [2024-11-20 06:39:48.236420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:16.678 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:16.678 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@866 -- # return 0 00:29:16.678 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.678 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:16.936 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:16.936 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:16.936 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:16.936 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:16.936 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:16.936 06:39:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.194 nvme0n1 00:29:17.194 06:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:17.194 06:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:17.194 06:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:17.194 06:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:17.194 06:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:17.194 06:39:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 
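Stripping away the xtrace noise, the setup for this 128 KiB randwrite error run boils down to the RPC calls below. This is a sketch assembled from the commands traced above; the bperf_rpc socket and every argument are copied from the log, while the rpc_cmd wrapper talking to the target application's default RPC socket is an assumption.

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bperf_rpc() { "$SPDK/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"; }
    rpc_cmd()   { "$SPDK/scripts/rpc.py" "$@"; }   # assumption: default RPC socket on the target side

    # keep per-status-code NVMe error counters and retry failed commands indefinitely
    bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # make sure crc32c error injection is off while the controller attaches
    rpc_cmd accel_error_inject_error -o crc32c -t disable
    # attach with data digest enabled (--ddgst) so TCP data PDUs carry a CRC32C digest
    bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt every 32nd crc32c operation so data digest errors start to appear
    rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
    # start the timed workload on the already-running bdevperf instance
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests

Only --ddgst is passed here, so this pass exercises data digests; the "Data digest error" and COMMAND TRANSIENT TRANSPORT ERROR entries that follow are the visible result of the injected CRC corruption.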
00:29:17.452 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:17.452 Zero copy mechanism will not be used. 00:29:17.452 Running I/O for 2 seconds... 00:29:17.452 [2024-11-20 06:39:49.149462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.149554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.149593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.155412] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.155494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.155537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.160675] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.160760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.160790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.165975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.166063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.166093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.171231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.171330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.171369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.176456] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.176535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.176565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.181439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.181526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.181556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.186493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.186592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.186629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.191603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.191682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.191711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.196671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.196747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.196774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.201623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.201693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.201729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.207197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.207267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.207294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.212492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.212565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.212592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.217461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.217540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24416 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.217568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.222453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.222535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.222564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.227845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.227914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.227941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.233908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.233980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.234013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.239432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.239515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.239544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.245093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.245208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.245237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.452 [2024-11-20 06:39:49.251497] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.452 [2024-11-20 06:39:49.251600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.452 [2024-11-20 06:39:49.251629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.453 [2024-11-20 06:39:49.258037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.453 [2024-11-20 06:39:49.258132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.453 [2024-11-20 06:39:49.258161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.453 [2024-11-20 06:39:49.265009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.453 [2024-11-20 06:39:49.265159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.453 [2024-11-20 06:39:49.265189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.453 [2024-11-20 06:39:49.271375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.453 [2024-11-20 06:39:49.271527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.453 [2024-11-20 06:39:49.271556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.453 [2024-11-20 06:39:49.277680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.453 [2024-11-20 06:39:49.277855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.453 [2024-11-20 06:39:49.277883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.453 [2024-11-20 06:39:49.284572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.453 [2024-11-20 06:39:49.284696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.453 [2024-11-20 06:39:49.284725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.291387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.291537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.291567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.298280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.298492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.298521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.305239] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.305337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.305366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.311071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.311146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.311174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.315973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.316070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.316099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.320812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.320896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.320925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.325654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.325738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.325765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.330569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.330647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.330675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.335546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.335628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.335657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.340535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.340623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.340652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.345512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.345582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.345613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.350393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.350466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.350500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.355290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.355389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.355417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.360168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.360252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.360280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.365122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.365191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.365218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.370166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.370254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.370282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.375471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 
06:39:49.375568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.712 [2024-11-20 06:39:49.375597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.712 [2024-11-20 06:39:49.380853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.712 [2024-11-20 06:39:49.380929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.380960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.386027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.386163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.386192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.391429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.391560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.391588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.396893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.397048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.397077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.403161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.403351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.403388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.408525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.408670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.408698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.413475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 
00:29:17.713 [2024-11-20 06:39:49.413566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.413595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.418409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.418503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.418532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.423323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.423421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.423449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.428313] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.428409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.428437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.433374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.433474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.433504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.439642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.439831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.439860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.445432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.445534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.445565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.452371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) 
with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.452473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.452502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.458897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.459066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.459094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.465849] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.466017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.466046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.473257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.473382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.473412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.480701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.480819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.480847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.487140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.487214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.487242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.492608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.492715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.492744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.498470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.498550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.498586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.503863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.503941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.503969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.508792] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.508883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.508911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.513752] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.513823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.513850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.518653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.518737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.518765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.713 [2024-11-20 06:39:49.523601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.713 [2024-11-20 06:39:49.523674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.713 [2024-11-20 06:39:49.523700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:17.714 [2024-11-20 06:39:49.528706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.714 [2024-11-20 06:39:49.528805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.714 [2024-11-20 06:39:49.528832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:17.714 [2024-11-20 06:39:49.533773] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.714 [2024-11-20 06:39:49.533858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.714 [2024-11-20 06:39:49.533886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:17.714 [2024-11-20 06:39:49.538772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.714 [2024-11-20 06:39:49.538849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.714 [2024-11-20 06:39:49.538877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:17.714 [2024-11-20 06:39:49.543702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:17.714 [2024-11-20 06:39:49.543784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:17.714 [2024-11-20 06:39:49.543812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.549757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.549862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.549890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.556728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.556895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.556923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.562433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.562547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.562577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.567891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.568030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.568058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.573666] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.573809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.573837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.579250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.579367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.579396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.584865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.584999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.585027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.590417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.590544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.590572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.595628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.595733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.595761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.601462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.601595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.601625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.607778] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.607938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.607967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.035 
[2024-11-20 06:39:49.613105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.613205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.613233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.618085] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.618246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.618274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.622941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.623044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.623072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.628065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.628237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.628265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.634634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.634747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.035 [2024-11-20 06:39:49.634776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.035 [2024-11-20 06:39:49.641268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.035 [2024-11-20 06:39:49.641354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.641389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.646890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.646982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.647010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:18.036 [2024-11-20 06:39:49.652512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.652617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.652646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.657483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.657557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.657584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.662583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.662679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.662708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.668444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.668623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.668652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.674860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.675074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.675103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.681638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.681785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.681814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.687708] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.687894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.687923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.694669] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.694770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.694799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.701405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.701519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.701548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.708471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.708605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.708632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.715594] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.715716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.715744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.722980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.723102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.723131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.729964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.730036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.730063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.736042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.736116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.736145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.742177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.742265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.742294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.748595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.748708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.748737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.755879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.755994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.756022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.762627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.762716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.762744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.768868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.768972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.769001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.775639] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.775761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.775789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.782909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.783007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.783036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.788953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.789037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.789066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.794225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.794331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.794360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.799612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.036 [2024-11-20 06:39:49.799753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.036 [2024-11-20 06:39:49.799780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.036 [2024-11-20 06:39:49.805717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.805877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.805912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.812139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.812298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.812345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.818468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.818617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.818645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.825479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.825595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.825625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.831378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.831452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.831484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.836608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.836744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.836773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.842010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.842153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.842182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.847136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.847233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.847262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.852865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.853076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.853104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.859174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.859338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.859367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.865532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.865700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.865728] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.872605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.872691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.872720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.879853] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.879956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.879984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.886002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.886076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.886104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.891108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.891184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.891212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.896036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.896108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.896134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.901055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.901139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.901168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.906114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.906191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 
06:39:49.906222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.911128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.911216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.911242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.916046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.916133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.916161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.921190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.921268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.921296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.926271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.926356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.926388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.931405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.931483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.931511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.936435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.936516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.936544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.941358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.941445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:18.295 [2024-11-20 06:39:49.941473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.946338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.946412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.946438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.951402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.951473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.951507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.956243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.956336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.956368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.961216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.961293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.961329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.966199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.966292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.966328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.971175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.971258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.971287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.976143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.976222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.976251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.295 [2024-11-20 06:39:49.981072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.295 [2024-11-20 06:39:49.981152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.295 [2024-11-20 06:39:49.981183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:49.985951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:49.986031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:49.986060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:49.991653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:49.991729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:49.991756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:49.997167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:49.997270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:49.997299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.003385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.003482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.003517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.009055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.009180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.009211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.014721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.014912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.014941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.021143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.021279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.021314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.027445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.027651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.027680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.034204] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.034411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.034440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.041774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.041863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.041895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.047933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.048010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.048040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.053481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.053568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.053598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.058568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.058658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.058687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.064170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.064247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.064275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.070382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.070491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.070520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.078023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.078140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.078169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.084230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.084401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.084430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.090542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.090725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.090754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.096762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.096905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.096934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.103043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.103233] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.103271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.109269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.109445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.109474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.115505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.115681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.115710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.121701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.121880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.121908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.296 [2024-11-20 06:39:50.127931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.296 [2024-11-20 06:39:50.128119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.296 [2024-11-20 06:39:50.128148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.134149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.134342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.134372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.140357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.140536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.140564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 5345.00 IOPS, 668.12 MiB/s [2024-11-20T05:39:50.390Z] [2024-11-20 06:39:50.147932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 
[2024-11-20 06:39:50.148108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.148136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.154105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.154281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.154315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.160219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.160397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.160426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.166142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.166261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.166290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.171019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.171103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.171132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.176238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.176393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.176422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.181527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.181596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.181624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.186901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with 
pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.186988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.187016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.192425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.192561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.192589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.198131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.198267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.198295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.203406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.203544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.203572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.208783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.208877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.208905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.214216] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.214310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.214338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.219609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.219750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.219779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.224980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.225124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.225152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.230522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.230658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.230686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.235689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.235824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.235852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.240686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.240777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.240805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.246033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.246202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.246231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.252754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.252854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.252889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.259556] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.259670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.259698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.266226] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.266420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.266449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.271874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.271947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.271979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.276909] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.276987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.277018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.281771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.281848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.281874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.287153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.554 [2024-11-20 06:39:50.287226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.554 [2024-11-20 06:39:50.287257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.554 [2024-11-20 06:39:50.292877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.292960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.292988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.298323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.298399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.298428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.303257] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.303337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.303373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.308297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.308381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.308412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.313349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.313431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.313459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.318462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.318540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.318567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.323479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.323552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.323582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.328436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.328515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.328543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.333454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.333534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.333562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.555 
[2024-11-20 06:39:50.338612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.338701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.338729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.343741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.343809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.343835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.348674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.348748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.348776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.353607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.353687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.353715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.358492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.358564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.358596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.363403] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.363501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.363529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.368312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.368407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.368435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.373224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.373323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.373351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.378070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.378147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.378174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.555 [2024-11-20 06:39:50.382961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.555 [2024-11-20 06:39:50.383038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.555 [2024-11-20 06:39:50.383066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.812 [2024-11-20 06:39:50.388475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.812 [2024-11-20 06:39:50.388551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.812 [2024-11-20 06:39:50.388578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.812 [2024-11-20 06:39:50.393620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.812 [2024-11-20 06:39:50.393702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.812 [2024-11-20 06:39:50.393730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.812 [2024-11-20 06:39:50.399070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.812 [2024-11-20 06:39:50.399147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.812 [2024-11-20 06:39:50.399176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.812 [2024-11-20 06:39:50.404531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.404629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.404656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 
cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.409506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.409587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.409618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.414323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.414404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.414432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.419650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.419738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.419764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.424616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.424699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.424728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.429592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.429679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.429707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.434548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.434629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.434662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.439419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.439491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.439523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.444428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.444500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.444533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.449395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.449477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.449506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.454887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.454966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.454994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.460369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.460443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.460471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.465362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.465435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.465465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.470431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.470515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.470544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.475747] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.475857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.475885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.481985] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.482180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.482208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.488327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.488479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.488508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.495310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.495421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.495449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.502443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.502594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.502622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.509683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.509779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.509808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.517074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.517176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.517204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.524439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.524650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.524678] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.531829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.531941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.531970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.538843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.539052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.539080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.546037] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.546257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.546285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.553074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.553181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.553210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.560168] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.560269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.560297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.567355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.567460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.567488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.574348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.574464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.574492] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.580163] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.580234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.580262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.584947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.585026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.585055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.589815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.589907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.589935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.594784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.594855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.594888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.599756] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.599830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.599860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.604696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.604771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.604802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.609668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.609746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 
06:39:50.609774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.614704] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.614774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.614801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.619576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.619652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.619680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.624552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.624635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.624663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.629857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.630026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.630055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.636220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.636385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.636414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:18.813 [2024-11-20 06:39:50.642570] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:18.813 [2024-11-20 06:39:50.642729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:18.813 [2024-11-20 06:39:50.642757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.649546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.649662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:19.071 [2024-11-20 06:39:50.649690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.656401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.656513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.656542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.663129] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.663439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.663469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.669928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.670188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.670217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.676626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.676916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.676945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.682901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.683287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.683325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.689654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.689979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.690008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.696296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.696613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.696642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.703043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.703414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.703443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.709875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.710238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.710267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.071 [2024-11-20 06:39:50.716622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.071 [2024-11-20 06:39:50.716946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.071 [2024-11-20 06:39:50.716975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.723250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.723588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.723618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.729247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.729551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.729579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.734338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.734643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.734672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.738948] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.739240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.739268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.743382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.743635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.743662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.747626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.747838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.747872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.751798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.752032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.752059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.756046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.756262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.756289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.760633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.760849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.760877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.765353] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.765551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.765580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.769938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.770154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 
nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.770181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.774581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.774844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.774872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.779773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.779984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.780011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.784013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.784222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.784250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.788857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.789101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.789130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.794020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.794324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.794352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.799666] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.799882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.799911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.805358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.805565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.805593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.811171] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.811488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.811517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.817561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.817845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.817874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.823734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.824063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.824091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.830010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.830284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.830319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.836201] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.836436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.836465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.842370] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.842641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.842670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.848611] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.848882] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.848909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.854846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.855124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.855152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.072 [2024-11-20 06:39:50.860634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.072 [2024-11-20 06:39:50.860855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.072 [2024-11-20 06:39:50.860884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.865234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.865443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.865471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.869537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.869740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.869767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.873763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.873963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.873991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.878247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.878513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.878541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.883328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 
06:39:50.883580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.883613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.888357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.888527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.888555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.894578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.894792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.894820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.899071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.899281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.899316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.073 [2024-11-20 06:39:50.903658] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.073 [2024-11-20 06:39:50.903953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.073 [2024-11-20 06:39:50.903981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.908295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.908531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.908558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.912784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.913004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.913031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.917451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 
00:29:19.331 [2024-11-20 06:39:50.917678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.917706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.921935] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.922170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.922198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.926512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.926734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.926762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.930782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.930981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.931008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.934908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.935132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.935160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.939252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.939500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.939528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.943572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.943809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.943837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.948214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with 
pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.948460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.948489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.953011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.953227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.953254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.957837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.958065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.958092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.962688] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.962935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.962962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.967568] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.331 [2024-11-20 06:39:50.967799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.331 [2024-11-20 06:39:50.967827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.331 [2024-11-20 06:39:50.972421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.972633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.972660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:50.976663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.976871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.976899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:50.980862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.981076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.981104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:50.985222] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.985440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.985468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:50.989509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.989722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.989750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:50.993735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.993944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.993971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:50.997958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:50.998185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:50.998213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.002188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.002396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.002429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.006328] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.006523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.006551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.010537] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.010733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.010761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.014694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.014901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.014928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.018933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.019153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.019180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.023173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.023399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.023428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.027448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.027675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.027703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.031693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.031892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.031920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.035976] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.036185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.036212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.040148] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.040368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.040412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.044406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.044602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.044631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.048637] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.048852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.048879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.052857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.053077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.053104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.056999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.057199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.057226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.061391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.061590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.061619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.065551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.065763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.065790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 
[2024-11-20 06:39:51.069690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.069901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.069929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.073884] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.074091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.074118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.078087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.078333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.078361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.082289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.332 [2024-11-20 06:39:51.082500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.332 [2024-11-20 06:39:51.082529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.332 [2024-11-20 06:39:51.086461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.086687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.086714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.090652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.090882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.090910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.094815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.095018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.095045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:19.333 [2024-11-20 06:39:51.098950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.099165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.099192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.103087] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.103291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.103326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.107200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.107415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.107443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.111323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.111550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.111577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.115519] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.115725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.115753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.119710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.119933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.119961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.123906] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.124114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.124141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.128122] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.128339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.128367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.132673] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.132865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.132893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.137984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.138313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.138341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:19.333 [2024-11-20 06:39:51.143175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x52e090) with pdu=0x200016eff3c8 00:29:19.333 [2024-11-20 06:39:51.143493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:19.333 [2024-11-20 06:39:51.143521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:19.333 5647.00 IOPS, 705.88 MiB/s 00:29:19.333 Latency(us) 00:29:19.333 [2024-11-20T05:39:51.169Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.333 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:19.333 nvme0n1 : 2.00 5643.04 705.38 0.00 0.00 2827.84 1868.99 12524.66 00:29:19.333 [2024-11-20T05:39:51.169Z] =================================================================================================================== 00:29:19.333 [2024-11-20T05:39:51.169Z] Total : 5643.04 705.38 0.00 0.00 2827.84 1868.99 12524.66 00:29:19.333 { 00:29:19.333 "results": [ 00:29:19.333 { 00:29:19.333 "job": "nvme0n1", 00:29:19.333 "core_mask": "0x2", 00:29:19.333 "workload": "randwrite", 00:29:19.333 "status": "finished", 00:29:19.333 "queue_depth": 16, 00:29:19.333 "io_size": 131072, 00:29:19.333 "runtime": 2.004771, 00:29:19.333 "iops": 5643.038531582909, 00:29:19.333 "mibps": 705.3798164478636, 00:29:19.333 "io_failed": 0, 00:29:19.333 "io_timeout": 0, 00:29:19.333 "avg_latency_us": 2827.840177049674, 00:29:19.333 "min_latency_us": 1868.9896296296297, 00:29:19.333 "max_latency_us": 12524.657777777778 00:29:19.333 } 00:29:19.333 ], 00:29:19.333 "core_count": 1 00:29:19.333 } 00:29:19.591 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:19.591 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # 
bperf_rpc bdev_get_iostat -b nvme0n1 00:29:19.591 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:19.591 | .driver_specific 00:29:19.591 | .nvme_error 00:29:19.591 | .status_code 00:29:19.591 | .command_transient_transport_error' 00:29:19.591 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 365 > 0 )) 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2199716 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2199716 ']' 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2199716 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2199716 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2199716' 00:29:19.849 killing process with pid 2199716 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2199716 00:29:19.849 Received shutdown signal, test time was about 2.000000 seconds 00:29:19.849 00:29:19.849 Latency(us) 00:29:19.849 [2024-11-20T05:39:51.685Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:19.849 [2024-11-20T05:39:51.685Z] =================================================================================================================== 00:29:19.849 [2024-11-20T05:39:51.685Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:19.849 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2199716 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2198325 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # '[' -z 2198325 ']' 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # kill -0 2198325 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # uname 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2198325 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@962 -- # '[' reactor_0 = 
sudo ']' 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2198325' 00:29:20.107 killing process with pid 2198325 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@971 -- # kill 2198325 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@976 -- # wait 2198325 00:29:20.107 00:29:20.107 real 0m15.686s 00:29:20.107 user 0m31.708s 00:29:20.107 sys 0m4.242s 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:20.107 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:20.107 ************************************ 00:29:20.107 END TEST nvmf_digest_error 00:29:20.107 ************************************ 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:20.367 06:39:51 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:20.367 rmmod nvme_tcp 00:29:20.367 rmmod nvme_fabrics 00:29:20.367 rmmod nvme_keyring 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 2198325 ']' 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 2198325 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # '[' -z 2198325 ']' 00:29:20.367 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@956 -- # kill -0 2198325 00:29:20.368 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2198325) - No such process 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@979 -- # echo 'Process with pid 2198325 is not found' 00:29:20.368 Process with pid 2198325 is not found 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # [[ 
cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:20.368 06:39:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:22.273 00:29:22.273 real 0m36.194s 00:29:22.273 user 1m4.384s 00:29:22.273 sys 0m10.278s 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:22.273 ************************************ 00:29:22.273 END TEST nvmf_digest 00:29:22.273 ************************************ 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:22.273 06:39:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:22.532 ************************************ 00:29:22.532 START TEST nvmf_bdevperf 00:29:22.532 ************************************ 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:22.532 * Looking for test storage... 
00:29:22.532 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lcov --version 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:22.532 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.533 --rc genhtml_branch_coverage=1 00:29:22.533 --rc genhtml_function_coverage=1 00:29:22.533 --rc genhtml_legend=1 00:29:22.533 --rc geninfo_all_blocks=1 00:29:22.533 --rc geninfo_unexecuted_blocks=1 00:29:22.533 00:29:22.533 ' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.533 --rc genhtml_branch_coverage=1 00:29:22.533 --rc genhtml_function_coverage=1 00:29:22.533 --rc genhtml_legend=1 00:29:22.533 --rc geninfo_all_blocks=1 00:29:22.533 --rc geninfo_unexecuted_blocks=1 00:29:22.533 00:29:22.533 ' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.533 --rc genhtml_branch_coverage=1 00:29:22.533 --rc genhtml_function_coverage=1 00:29:22.533 --rc genhtml_legend=1 00:29:22.533 --rc geninfo_all_blocks=1 00:29:22.533 --rc geninfo_unexecuted_blocks=1 00:29:22.533 00:29:22.533 ' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:22.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:22.533 --rc genhtml_branch_coverage=1 00:29:22.533 --rc genhtml_function_coverage=1 00:29:22.533 --rc genhtml_legend=1 00:29:22.533 --rc geninfo_all_blocks=1 00:29:22.533 --rc geninfo_unexecuted_blocks=1 00:29:22.533 00:29:22.533 ' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:22.533 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:22.533 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # xtrace_disable 00:29:22.534 06:39:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:25.067 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:25.067 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
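The device-scan loop here maps each supported NIC to its kernel net device by globbing sysfs under the PCI address; a standalone equivalent of that lookup for the two E810 ports (0x8086:0x159b) detected in this run, addresses taken from the trace:

  # net devices bound to each detected port, same sysfs glob the script uses
  ls /sys/bus/pci/devices/0000:09:00.0/net/
  ls /sys/bus/pci/devices/0000:09:00.1/net/

As echoed just below, they resolve to cvl_0_0 and cvl_0_1, which become the target and initiator interfaces respectively.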
00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.067 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:25.068 Found net devices under 0000:09:00.0: cvl_0_0 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:25.068 Found net devices under 0000:09:00.1: cvl_0_1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # is_hw=yes 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:25.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:25.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:29:25.068 00:29:25.068 --- 10.0.0.2 ping statistics --- 00:29:25.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.068 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:25.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:25.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:29:25.068 00:29:25.068 --- 10.0.0.1 ping statistics --- 00:29:25.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:25.068 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2202190 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2202190 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2202190 ']' 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:25.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.068 [2024-11-20 06:39:56.614130] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:29:25.068 [2024-11-20 06:39:56.614224] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:25.068 [2024-11-20 06:39:56.686802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:25.068 [2024-11-20 06:39:56.746746] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:25.068 [2024-11-20 06:39:56.746794] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:25.068 [2024-11-20 06:39:56.746826] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:25.068 [2024-11-20 06:39:56.746838] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:25.068 [2024-11-20 06:39:56.746847] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:25.068 [2024-11-20 06:39:56.748289] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:25.068 [2024-11-20 06:39:56.748422] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:25.068 [2024-11-20 06:39:56.748426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.068 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.325 [2024-11-20 06:39:56.902526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.325 Malloc0 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
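The rpc_cmd calls above are wrappers around scripts/rpc.py talking to the nvmf_tgt just started inside the cvl_0_0_ns_spdk namespace; a standalone sketch of the same target bring-up, assuming the target's default RPC socket and a relative scripts/rpc.py path (the Malloc0 namespace and the 10.0.0.2:4420 listener are attached by the next two rpc_cmd calls in the trace):

  # TCP transport with the options used above, a 64 MiB malloc bdev with 512-byte blocks, and the test subsystem
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001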
00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.325 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:25.326 [2024-11-20 06:39:56.965456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:25.326 { 00:29:25.326 "params": { 00:29:25.326 "name": "Nvme$subsystem", 00:29:25.326 "trtype": "$TEST_TRANSPORT", 00:29:25.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:25.326 "adrfam": "ipv4", 00:29:25.326 "trsvcid": "$NVMF_PORT", 00:29:25.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:25.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:25.326 "hdgst": ${hdgst:-false}, 00:29:25.326 "ddgst": ${ddgst:-false} 00:29:25.326 }, 00:29:25.326 "method": "bdev_nvme_attach_controller" 00:29:25.326 } 00:29:25.326 EOF 00:29:25.326 )") 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:25.326 06:39:56 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:25.326 "params": { 00:29:25.326 "name": "Nvme1", 00:29:25.326 "trtype": "tcp", 00:29:25.326 "traddr": "10.0.0.2", 00:29:25.326 "adrfam": "ipv4", 00:29:25.326 "trsvcid": "4420", 00:29:25.326 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:25.326 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:25.326 "hdgst": false, 00:29:25.326 "ddgst": false 00:29:25.326 }, 00:29:25.326 "method": "bdev_nvme_attach_controller" 00:29:25.326 }' 00:29:25.326 [2024-11-20 06:39:57.014505] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:29:25.326 [2024-11-20 06:39:57.014600] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202230 ] 00:29:25.326 [2024-11-20 06:39:57.084812] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:25.326 [2024-11-20 06:39:57.146477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:25.891 Running I/O for 1 seconds... 00:29:26.825 8522.00 IOPS, 33.29 MiB/s 00:29:26.825 Latency(us) 00:29:26.825 [2024-11-20T05:39:58.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:26.825 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:26.825 Verification LBA range: start 0x0 length 0x4000 00:29:26.825 Nvme1n1 : 1.01 8542.24 33.37 0.00 0.00 14925.35 3082.62 18544.26 00:29:26.825 [2024-11-20T05:39:58.662Z] =================================================================================================================== 00:29:26.826 [2024-11-20T05:39:58.662Z] Total : 8542.24 33.37 0.00 0.00 14925.35 3082.62 18544.26 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2202476 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:27.083 { 00:29:27.083 "params": { 00:29:27.083 "name": "Nvme$subsystem", 00:29:27.083 "trtype": "$TEST_TRANSPORT", 00:29:27.083 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:27.083 "adrfam": "ipv4", 00:29:27.083 "trsvcid": "$NVMF_PORT", 00:29:27.083 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:27.083 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:27.083 "hdgst": ${hdgst:-false}, 00:29:27.083 "ddgst": ${ddgst:-false} 00:29:27.083 }, 00:29:27.083 "method": "bdev_nvme_attach_controller" 00:29:27.083 } 00:29:27.083 EOF 00:29:27.083 )") 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
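gen_nvmf_target_json above streams bdevperf a one-controller JSON config over a file descriptor; the trace only shows the inner bdev_nvme_attach_controller entry, so the wrapper layout below follows SPDK's usual JSON-config schema and is assumed, as is the /tmp/bperf.json file path. The params themselves are the ones printed in the trace:

  # /tmp/bperf.json (illustrative path, wrapper layout assumed)
  {
    "subsystems": [ { "subsystem": "bdev", "config": [ {
      "method": "bdev_nvme_attach_controller",
      "params": { "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
                  "adrfam": "ipv4", "trsvcid": "4420",
                  "subnqn": "nqn.2016-06.io.spdk:cnode1",
                  "hostnqn": "nqn.2016-06.io.spdk:host1",
                  "hdgst": false, "ddgst": false }
    } ] } ]
  }
  # same workload as this second run: queue depth 128, 4 KiB I/O, verify workload, 15 seconds
  build/examples/bdevperf --json /tmp/bperf.json -q 128 -o 4096 -w verify -t 15 -f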
00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:29:27.083 06:39:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:27.083 "params": { 00:29:27.083 "name": "Nvme1", 00:29:27.083 "trtype": "tcp", 00:29:27.083 "traddr": "10.0.0.2", 00:29:27.083 "adrfam": "ipv4", 00:29:27.083 "trsvcid": "4420", 00:29:27.083 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:27.083 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:27.083 "hdgst": false, 00:29:27.083 "ddgst": false 00:29:27.083 }, 00:29:27.083 "method": "bdev_nvme_attach_controller" 00:29:27.083 }' 00:29:27.083 [2024-11-20 06:39:58.743312] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:27.083 [2024-11-20 06:39:58.743404] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2202476 ] 00:29:27.083 [2024-11-20 06:39:58.811791] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.083 [2024-11-20 06:39:58.869780] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:27.648 Running I/O for 15 seconds... 00:29:29.516 8520.00 IOPS, 33.28 MiB/s [2024-11-20T05:40:01.922Z] 8644.00 IOPS, 33.77 MiB/s [2024-11-20T05:40:01.922Z] 06:40:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2202190 00:29:30.086 06:40:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:30.086 [2024-11-20 06:40:01.705393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:39264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:39272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:39280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:39288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:39296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:39304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 
06:40:01.705633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:39312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:39320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:39328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:39336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:39344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:39352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:39368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:39376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:39384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705959] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.705977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:39392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.705992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:39400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:39408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:39416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:39424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:39432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:39440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:39448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:39456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:39464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:39472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:39480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:39488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:39496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:39504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:39512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:39520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.086 [2024-11-20 06:40:01.706515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:39528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.086 [2024-11-20 06:40:01.706529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:39544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:39552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:39560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:39568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:39576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:39584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:39592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:39600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:39608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:39616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:39624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:30.087 [2024-11-20 06:40:01.706919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:39632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:39640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:39648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.706982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.706995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:39656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:39664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:39672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:39680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:39688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:39696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:39704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707177] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:39712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:39720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:39728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:39736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:39752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:39760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:39768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:39776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:39784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707479] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:39792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:39800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:39808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:39816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:39824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:40280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:30.087 [2024-11-20 06:40:01.707652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:39832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.087 [2024-11-20 06:40:01.707692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.087 [2024-11-20 06:40:01.707706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:39840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:39848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:39856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:56 nsid:1 lba:39864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:39872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:39880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:39888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:39896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:39904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:39912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:39920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.707986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:39928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.707998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:39936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:39944 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:39952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:39960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:39968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:39976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:39984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:39992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:40000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:40008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:40016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:40024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:30.088 [2024-11-20 06:40:01.708333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:40032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:40040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:40048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:40056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:40064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:40072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:40080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:40088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:40096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:40104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708654] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:40112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:40128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:40136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:40144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.088 [2024-11-20 06:40:01.708796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:40152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.088 [2024-11-20 06:40:01.708808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:40160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:40168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:40176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:40184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708910] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:40192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:40200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.708978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:40208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.708990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:40216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:40224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:40232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:40240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:40248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:40256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:40264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:30.089 [2024-11-20 06:40:01.709168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe018d0 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.709194] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:30.089 [2024-11-20 06:40:01.709204] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:30.089 [2024-11-20 06:40:01.709214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:40272 len:8 PRP1 0x0 PRP2 0x0 00:29:30.089 [2024-11-20 06:40:01.709226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709382] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.089 [2024-11-20 06:40:01.709405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.089 [2024-11-20 06:40:01.709446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.089 [2024-11-20 06:40:01.709475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709488] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:30.089 [2024-11-20 06:40:01.709502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:30.089 [2024-11-20 06:40:01.709514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.712888] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.089 [2024-11-20 06:40:01.712922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.089 [2024-11-20 06:40:01.713496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.089 [2024-11-20 06:40:01.713528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.089 [2024-11-20 06:40:01.713545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.713797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.089 [2024-11-20 06:40:01.713991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.089 [2024-11-20 06:40:01.714010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.089 [2024-11-20 06:40:01.714025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.089 [2024-11-20 06:40:01.714041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.089 [2024-11-20 06:40:01.726330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.089 [2024-11-20 06:40:01.726710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.089 [2024-11-20 06:40:01.726754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.089 [2024-11-20 06:40:01.726770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.727003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.089 [2024-11-20 06:40:01.727197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.089 [2024-11-20 06:40:01.727215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.089 [2024-11-20 06:40:01.727227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.089 [2024-11-20 06:40:01.727239] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.089 [2024-11-20 06:40:01.739474] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.089 [2024-11-20 06:40:01.739841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.089 [2024-11-20 06:40:01.739869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.089 [2024-11-20 06:40:01.739884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.740127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.089 [2024-11-20 06:40:01.740346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.089 [2024-11-20 06:40:01.740366] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.089 [2024-11-20 06:40:01.740378] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.089 [2024-11-20 06:40:01.740390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
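Editorial note: the long ABORTED - SQ DELETION dump above is the expected fallout of the kill -9 2202190 while bdevperf still had a full queue in flight; once the target's submission queue is gone, every outstanding command on I/O qpair 1 is completed back with that status. The numbers line up with the bdevperf arguments; a quick check, assuming 512-byte logical blocks on the Malloc0 namespace (hence len:8 per 4 KiB I/O):

    # Aborted READs cover lba 39264..40272 in steps of 8 blocks, plus one WRITE at lba 40280.
    io_size=4096; block_size=512
    echo "blocks per I/O : $(( io_size / block_size ))"      # 8 -> matches 'len:8' on every entry
    echo "aborted READs  : $(( (40272 - 39264) / 8 + 1 ))"   # 127 (lba 39264..40272, step 8)
    echo "plus one WRITE at lba 40280 -> 128 commands in flight, matching bdevperf -q 128"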
00:29:30.089 [2024-11-20 06:40:01.752456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.089 [2024-11-20 06:40:01.752800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.089 [2024-11-20 06:40:01.752828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.089 [2024-11-20 06:40:01.752843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.753065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.089 [2024-11-20 06:40:01.753273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.089 [2024-11-20 06:40:01.753315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.089 [2024-11-20 06:40:01.753329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.089 [2024-11-20 06:40:01.753341] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.089 [2024-11-20 06:40:01.765450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.089 [2024-11-20 06:40:01.765853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.089 [2024-11-20 06:40:01.765880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.089 [2024-11-20 06:40:01.765896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.089 [2024-11-20 06:40:01.766117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.089 [2024-11-20 06:40:01.766336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.766355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.766367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.766378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.090 [2024-11-20 06:40:01.778465] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.778850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.778891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.778907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.779126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.779363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.779387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.779400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.779411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.090 [2024-11-20 06:40:01.791478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.791889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.791930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.791946] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.792180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.792399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.792419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.792431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.792442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
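Editorial note: each reconnect attempt here dies in posix_sock_create with errno 111, which is ECONNREFUSED on Linux: nothing is listening on 10.0.0.2:4420 any more after the target was killed, so the kernel answers the TCP SYN with a RST. Outside the harness, the same condition can be watched for with a plain bash probe (a sketch; assumes bash with /dev/tcp support and coreutils timeout):

    # Poll until something accepts connections on the target address:port again.
    until timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; do
        echo "connect() still refused (errno 111); target not listening yet"
        sleep 1
    done
    echo "listener is back on 10.0.0.2:4420"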
00:29:30.090 [2024-11-20 06:40:01.804525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.804861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.804888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.804903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.805124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.805376] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.805396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.805408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.805421] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.090 [2024-11-20 06:40:01.817630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.817997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.818025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.818040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.818253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.818493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.818514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.818527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.818543] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.090 [2024-11-20 06:40:01.830673] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.831107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.831150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.831165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.831449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.831662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.831681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.831692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.831703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.090 [2024-11-20 06:40:01.843846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.844166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.844192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.844207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.844457] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.844671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.844690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.844702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.844713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.090 [2024-11-20 06:40:01.856827] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.857196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.857239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.857255] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.857516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.857747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.857767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.857779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.857790] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.090 [2024-11-20 06:40:01.869876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.090 [2024-11-20 06:40:01.870249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.090 [2024-11-20 06:40:01.870276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.090 [2024-11-20 06:40:01.870291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.090 [2024-11-20 06:40:01.870548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.090 [2024-11-20 06:40:01.870758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.090 [2024-11-20 06:40:01.870776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.090 [2024-11-20 06:40:01.870788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.090 [2024-11-20 06:40:01.870799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.091 [2024-11-20 06:40:01.882977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.091 [2024-11-20 06:40:01.883343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 06:40:01.883372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.091 [2024-11-20 06:40:01.883388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.091 [2024-11-20 06:40:01.883628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.091 [2024-11-20 06:40:01.883836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.091 [2024-11-20 06:40:01.883855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.091 [2024-11-20 06:40:01.883866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.091 [2024-11-20 06:40:01.883877] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.091 [2024-11-20 06:40:01.896065] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.091 [2024-11-20 06:40:01.896436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 06:40:01.896480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.091 [2024-11-20 06:40:01.896495] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.091 [2024-11-20 06:40:01.896747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.091 [2024-11-20 06:40:01.896954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.091 [2024-11-20 06:40:01.896972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.091 [2024-11-20 06:40:01.896984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.091 [2024-11-20 06:40:01.896995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.091 [2024-11-20 06:40:01.909127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.091 [2024-11-20 06:40:01.909488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.091 [2024-11-20 06:40:01.909517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.091 [2024-11-20 06:40:01.909533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.091 [2024-11-20 06:40:01.909760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.091 [2024-11-20 06:40:01.909968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.091 [2024-11-20 06:40:01.909987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.091 [2024-11-20 06:40:01.909999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.091 [2024-11-20 06:40:01.910010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.350 [2024-11-20 06:40:01.922281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.350 [2024-11-20 06:40:01.922651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.350 [2024-11-20 06:40:01.922697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.350 [2024-11-20 06:40:01.922721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.350 [2024-11-20 06:40:01.922989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.350 [2024-11-20 06:40:01.923181] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.350 [2024-11-20 06:40:01.923200] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.350 [2024-11-20 06:40:01.923212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.350 [2024-11-20 06:40:01.923223] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.350 [2024-11-20 06:40:01.935345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.350 [2024-11-20 06:40:01.935681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.350 [2024-11-20 06:40:01.935709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.350 [2024-11-20 06:40:01.935724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.350 [2024-11-20 06:40:01.935947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.350 [2024-11-20 06:40:01.936155] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.350 [2024-11-20 06:40:01.936174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.350 [2024-11-20 06:40:01.936185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.350 [2024-11-20 06:40:01.936196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.350 [2024-11-20 06:40:01.948576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.350 [2024-11-20 06:40:01.948955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.350 [2024-11-20 06:40:01.948983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.350 [2024-11-20 06:40:01.948998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.350 [2024-11-20 06:40:01.949234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.350 [2024-11-20 06:40:01.949471] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.350 [2024-11-20 06:40:01.949497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.350 [2024-11-20 06:40:01.949511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.350 [2024-11-20 06:40:01.949522] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.350 [2024-11-20 06:40:01.961743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.350 [2024-11-20 06:40:01.962089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.350 [2024-11-20 06:40:01.962116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.350 [2024-11-20 06:40:01.962131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.350 [2024-11-20 06:40:01.962382] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.350 [2024-11-20 06:40:01.962609] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.350 [2024-11-20 06:40:01.962629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.350 [2024-11-20 06:40:01.962642] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.350 [2024-11-20 06:40:01.962653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.350 [2024-11-20 06:40:01.975423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.350 [2024-11-20 06:40:01.975802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.350 [2024-11-20 06:40:01.975830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.350 [2024-11-20 06:40:01.975846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.350 [2024-11-20 06:40:01.976071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.350 [2024-11-20 06:40:01.976282] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.350 [2024-11-20 06:40:01.976329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.350 [2024-11-20 06:40:01.976343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.350 [2024-11-20 06:40:01.976371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.350 [2024-11-20 06:40:01.988767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.350 [2024-11-20 06:40:01.989261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.350 [2024-11-20 06:40:01.989312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.350 [2024-11-20 06:40:01.989331] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.350 [2024-11-20 06:40:01.989560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.350 [2024-11-20 06:40:01.989789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.350 [2024-11-20 06:40:01.989807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.350 [2024-11-20 06:40:01.989820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:01.989835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.351 [2024-11-20 06:40:02.002324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.002680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.002708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.002737] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.002944] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.003162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.003181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.003193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.003205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.351 [2024-11-20 06:40:02.015892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.016284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.016337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.016355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.016608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.016838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.016857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.016869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.016880] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.351 [2024-11-20 06:40:02.029138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.029537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.029566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.029582] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.029822] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.030016] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.030034] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.030046] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.030058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.351 [2024-11-20 06:40:02.042421] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.042803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.042834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.042850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.043082] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.043276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.043321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.043335] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.043346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.351 [2024-11-20 06:40:02.055625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.056041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.056108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.056123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.056381] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.056580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.056599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.056611] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.056637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.351 [2024-11-20 06:40:02.068975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.069411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.069441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.069457] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.069684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.069891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.069910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.069922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.069932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.351 [2024-11-20 06:40:02.082225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.082599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.082628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.082644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.082887] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.083113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.083132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.083145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.083156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.351 [2024-11-20 06:40:02.095453] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.095860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.095930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.095945] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.096191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.096410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.096430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.096443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.096454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.351 [2024-11-20 06:40:02.108796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.109204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.109258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.109273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.109549] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.109759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.109778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.351 [2024-11-20 06:40:02.109790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.351 [2024-11-20 06:40:02.109801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.351 [2024-11-20 06:40:02.121928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.351 [2024-11-20 06:40:02.122293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.351 [2024-11-20 06:40:02.122329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.351 [2024-11-20 06:40:02.122346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.351 [2024-11-20 06:40:02.122588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.351 [2024-11-20 06:40:02.122797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.351 [2024-11-20 06:40:02.122823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.352 [2024-11-20 06:40:02.122840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.352 [2024-11-20 06:40:02.122859] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.352 [2024-11-20 06:40:02.134923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.352 [2024-11-20 06:40:02.135258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.352 [2024-11-20 06:40:02.135287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.352 [2024-11-20 06:40:02.135309] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.352 [2024-11-20 06:40:02.135568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.352 [2024-11-20 06:40:02.135776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.352 [2024-11-20 06:40:02.135795] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.352 [2024-11-20 06:40:02.135807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.352 [2024-11-20 06:40:02.135818] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.352 [2024-11-20 06:40:02.148190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.352 [2024-11-20 06:40:02.148594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.352 [2024-11-20 06:40:02.148636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.352 [2024-11-20 06:40:02.148650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.352 [2024-11-20 06:40:02.148885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.352 [2024-11-20 06:40:02.149093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.352 [2024-11-20 06:40:02.149112] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.352 [2024-11-20 06:40:02.149124] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.352 [2024-11-20 06:40:02.149135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.352 [2024-11-20 06:40:02.161222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.352 [2024-11-20 06:40:02.161744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.352 [2024-11-20 06:40:02.161772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.352 [2024-11-20 06:40:02.161803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.352 [2024-11-20 06:40:02.162053] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.352 [2024-11-20 06:40:02.162260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.352 [2024-11-20 06:40:02.162279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.352 [2024-11-20 06:40:02.162291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.352 [2024-11-20 06:40:02.162310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.352 [2024-11-20 06:40:02.174268] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.352 [2024-11-20 06:40:02.174646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.352 [2024-11-20 06:40:02.174688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.352 [2024-11-20 06:40:02.174703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.352 [2024-11-20 06:40:02.174950] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.352 [2024-11-20 06:40:02.175143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.352 [2024-11-20 06:40:02.175161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.352 [2024-11-20 06:40:02.175173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.352 [2024-11-20 06:40:02.175183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.611 [2024-11-20 06:40:02.187704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.611 [2024-11-20 06:40:02.188074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-11-20 06:40:02.188145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-11-20 06:40:02.188161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.611 [2024-11-20 06:40:02.188387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.611 [2024-11-20 06:40:02.188620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.611 [2024-11-20 06:40:02.188655] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.611 [2024-11-20 06:40:02.188668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.611 [2024-11-20 06:40:02.188679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.611 [2024-11-20 06:40:02.200776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.611 [2024-11-20 06:40:02.201251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-11-20 06:40:02.201310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-11-20 06:40:02.201327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.611 [2024-11-20 06:40:02.201568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.611 [2024-11-20 06:40:02.201760] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.611 [2024-11-20 06:40:02.201779] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.611 [2024-11-20 06:40:02.201791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.611 [2024-11-20 06:40:02.201802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.611 [2024-11-20 06:40:02.213800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.611 [2024-11-20 06:40:02.214299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-11-20 06:40:02.214363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-11-20 06:40:02.214380] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.611 [2024-11-20 06:40:02.214594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.611 [2024-11-20 06:40:02.214832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.611 [2024-11-20 06:40:02.214852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.611 [2024-11-20 06:40:02.214879] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.611 [2024-11-20 06:40:02.214890] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.611 7097.33 IOPS, 27.72 MiB/s [2024-11-20T05:40:02.447Z] [2024-11-20 06:40:02.228440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.611 [2024-11-20 06:40:02.228792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-11-20 06:40:02.228820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-11-20 06:40:02.228836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.611 [2024-11-20 06:40:02.229060] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.611 [2024-11-20 06:40:02.229270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.611 [2024-11-20 06:40:02.229311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.611 [2024-11-20 06:40:02.229326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.611 [2024-11-20 06:40:02.229338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.611 [2024-11-20 06:40:02.241579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.611 [2024-11-20 06:40:02.241970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-11-20 06:40:02.241999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-11-20 06:40:02.242014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.611 [2024-11-20 06:40:02.242254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.611 [2024-11-20 06:40:02.242495] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.611 [2024-11-20 06:40:02.242515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.611 [2024-11-20 06:40:02.242527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.611 [2024-11-20 06:40:02.242539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.611 [2024-11-20 06:40:02.254797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.611 [2024-11-20 06:40:02.255167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.611 [2024-11-20 06:40:02.255196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.611 [2024-11-20 06:40:02.255211] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.611 [2024-11-20 06:40:02.255464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.611 [2024-11-20 06:40:02.255697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.611 [2024-11-20 06:40:02.255716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.611 [2024-11-20 06:40:02.255727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.611 [2024-11-20 06:40:02.255738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.612 [2024-11-20 06:40:02.267799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.268105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.268146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.268161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.268389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.268611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.268645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.268658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.268669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.612 [2024-11-20 06:40:02.280917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.281256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.281283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.281298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.281552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.281779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.281798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.281810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.281821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.612 [2024-11-20 06:40:02.293934] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.294297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.294345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.294360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.294607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.294815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.294834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.294851] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.294863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.612 [2024-11-20 06:40:02.307029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.307455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.307498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.307514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.307753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.307962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.307980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.307991] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.308002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.612 [2024-11-20 06:40:02.320077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.320413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.320441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.320456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.320677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.320886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.320904] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.320916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.320927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.612 [2024-11-20 06:40:02.333155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.333587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.333614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.333646] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.333885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.334092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.334111] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.334123] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.334134] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.612 [2024-11-20 06:40:02.346237] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.346599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.346628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.346645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.346883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.347091] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.347110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.347121] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.347133] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.612 [2024-11-20 06:40:02.359334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.359635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.359677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.359692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.359907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.360116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.360134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.360146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.360157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.612 [2024-11-20 06:40:02.372567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.372914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.372942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.372957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.373178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.373415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.373435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.373448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.612 [2024-11-20 06:40:02.373460] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.612 [2024-11-20 06:40:02.385712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.612 [2024-11-20 06:40:02.386148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.612 [2024-11-20 06:40:02.386181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.612 [2024-11-20 06:40:02.386198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.612 [2024-11-20 06:40:02.386458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.612 [2024-11-20 06:40:02.386673] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.612 [2024-11-20 06:40:02.386692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.612 [2024-11-20 06:40:02.386704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.613 [2024-11-20 06:40:02.386715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.613 [2024-11-20 06:40:02.398976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.613 [2024-11-20 06:40:02.399338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.613 [2024-11-20 06:40:02.399366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.613 [2024-11-20 06:40:02.399382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.613 [2024-11-20 06:40:02.399623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.613 [2024-11-20 06:40:02.399831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.613 [2024-11-20 06:40:02.399851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.613 [2024-11-20 06:40:02.399863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.613 [2024-11-20 06:40:02.399874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.613 [2024-11-20 06:40:02.412111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.613 [2024-11-20 06:40:02.412483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.613 [2024-11-20 06:40:02.412527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.613 [2024-11-20 06:40:02.412543] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.613 [2024-11-20 06:40:02.412795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.613 [2024-11-20 06:40:02.413002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.613 [2024-11-20 06:40:02.413020] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.613 [2024-11-20 06:40:02.413031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.613 [2024-11-20 06:40:02.413043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.613 [2024-11-20 06:40:02.425392] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.613 [2024-11-20 06:40:02.425779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.613 [2024-11-20 06:40:02.425806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.613 [2024-11-20 06:40:02.425822] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.613 [2024-11-20 06:40:02.426056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.613 [2024-11-20 06:40:02.426270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.613 [2024-11-20 06:40:02.426315] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.613 [2024-11-20 06:40:02.426330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.613 [2024-11-20 06:40:02.426342] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.613 [2024-11-20 06:40:02.438525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.613 [2024-11-20 06:40:02.439041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.613 [2024-11-20 06:40:02.439084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.613 [2024-11-20 06:40:02.439100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.613 [2024-11-20 06:40:02.439347] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.613 [2024-11-20 06:40:02.439569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.613 [2024-11-20 06:40:02.439589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.613 [2024-11-20 06:40:02.439601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.613 [2024-11-20 06:40:02.439612] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.873 [2024-11-20 06:40:02.452147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.873 [2024-11-20 06:40:02.452478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-11-20 06:40:02.452521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-11-20 06:40:02.452537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.873 [2024-11-20 06:40:02.452775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.873 [2024-11-20 06:40:02.452984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.873 [2024-11-20 06:40:02.453002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.873 [2024-11-20 06:40:02.453013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.873 [2024-11-20 06:40:02.453024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.873 [2024-11-20 06:40:02.465423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.873 [2024-11-20 06:40:02.465777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-11-20 06:40:02.465805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-11-20 06:40:02.465820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.873 [2024-11-20 06:40:02.466049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.873 [2024-11-20 06:40:02.466316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.873 [2024-11-20 06:40:02.466339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.873 [2024-11-20 06:40:02.466358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.873 [2024-11-20 06:40:02.466372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.873 [2024-11-20 06:40:02.478774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.873 [2024-11-20 06:40:02.479182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-11-20 06:40:02.479210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-11-20 06:40:02.479227] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.873 [2024-11-20 06:40:02.479451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.873 [2024-11-20 06:40:02.479706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.873 [2024-11-20 06:40:02.479726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.873 [2024-11-20 06:40:02.479738] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.873 [2024-11-20 06:40:02.479750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.873 [2024-11-20 06:40:02.492089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.873 [2024-11-20 06:40:02.492508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-11-20 06:40:02.492537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-11-20 06:40:02.492553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.873 [2024-11-20 06:40:02.492794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.873 [2024-11-20 06:40:02.492993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.873 [2024-11-20 06:40:02.493012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.873 [2024-11-20 06:40:02.493024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.873 [2024-11-20 06:40:02.493036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.873 [2024-11-20 06:40:02.505367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.873 [2024-11-20 06:40:02.505798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.873 [2024-11-20 06:40:02.505841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.873 [2024-11-20 06:40:02.505857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.873 [2024-11-20 06:40:02.506097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.873 [2024-11-20 06:40:02.506322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.873 [2024-11-20 06:40:02.506343] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.506370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.506382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 [2024-11-20 06:40:02.518639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.518997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.519025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.519041] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.519269] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.519512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.519532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.519545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.519557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.874 [2024-11-20 06:40:02.531937] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.532258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.532300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.532341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.532570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.532787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.532807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.532818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.532830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 [2024-11-20 06:40:02.545230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.545584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.545613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.545628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.545865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.546081] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.546100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.546112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.546123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.874 [2024-11-20 06:40:02.558434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.558858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.558886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.558907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.559136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.559396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.559417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.559430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.559442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 [2024-11-20 06:40:02.571676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.572043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.572085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.572101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.572369] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.572588] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.572628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.572641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.572653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.874 [2024-11-20 06:40:02.584873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.585183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.585209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.585223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.585487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.585725] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.585745] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.585757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.585768] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 [2024-11-20 06:40:02.598166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.598611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.598639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.598655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.598883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.599103] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.599122] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.599134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.599145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.874 [2024-11-20 06:40:02.611451] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.874 [2024-11-20 06:40:02.611874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.874 [2024-11-20 06:40:02.611901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.874 [2024-11-20 06:40:02.611915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.874 [2024-11-20 06:40:02.612151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.874 [2024-11-20 06:40:02.612410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.874 [2024-11-20 06:40:02.612431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.874 [2024-11-20 06:40:02.612444] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.874 [2024-11-20 06:40:02.612456] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.874 [2024-11-20 06:40:02.624749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.625119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.625162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.625178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.625429] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.625669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.625689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.625701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.625712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.875 [2024-11-20 06:40:02.638072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.638466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.638495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.638510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.638749] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.638947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.638966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.638983] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.638995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.875 [2024-11-20 06:40:02.651338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.651770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.651798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.651813] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.652055] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.652254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.652273] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.652300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.652323] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.875 [2024-11-20 06:40:02.664625] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.665061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.665089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.665105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.665343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.665555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.665575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.665603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.665615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.875 [2024-11-20 06:40:02.677853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.678246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.678288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.678312] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.678544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.678777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.678796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.678808] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.678819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:30.875 [2024-11-20 06:40:02.691051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.691422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.691451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.691466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.691707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.691907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.691926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.691937] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.691948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:30.875 [2024-11-20 06:40:02.704602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:30.875 [2024-11-20 06:40:02.704903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:30.875 [2024-11-20 06:40:02.704947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:30.875 [2024-11-20 06:40:02.704963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:30.875 [2024-11-20 06:40:02.705176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:30.875 [2024-11-20 06:40:02.705436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:30.875 [2024-11-20 06:40:02.705459] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:30.875 [2024-11-20 06:40:02.705472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:30.875 [2024-11-20 06:40:02.705485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.139 [2024-11-20 06:40:02.718024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.139 [2024-11-20 06:40:02.718402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.139 [2024-11-20 06:40:02.718430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.139 [2024-11-20 06:40:02.718447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.139 [2024-11-20 06:40:02.718675] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.139 [2024-11-20 06:40:02.718913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.139 [2024-11-20 06:40:02.718934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.139 [2024-11-20 06:40:02.718947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.139 [2024-11-20 06:40:02.718959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.139 [2024-11-20 06:40:02.731677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.139 [2024-11-20 06:40:02.732048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.139 [2024-11-20 06:40:02.732076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.139 [2024-11-20 06:40:02.732097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.139 [2024-11-20 06:40:02.732336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.139 [2024-11-20 06:40:02.732548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.139 [2024-11-20 06:40:02.732568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.139 [2024-11-20 06:40:02.732581] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.139 [2024-11-20 06:40:02.732593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.139 [2024-11-20 06:40:02.744912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.139 [2024-11-20 06:40:02.745349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.139 [2024-11-20 06:40:02.745378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.139 [2024-11-20 06:40:02.745394] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.139 [2024-11-20 06:40:02.745635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.139 [2024-11-20 06:40:02.745857] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.139 [2024-11-20 06:40:02.745878] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.139 [2024-11-20 06:40:02.745890] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.139 [2024-11-20 06:40:02.745902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.139 [2024-11-20 06:40:02.758148] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.139 [2024-11-20 06:40:02.758590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.139 [2024-11-20 06:40:02.758619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.139 [2024-11-20 06:40:02.758635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.139 [2024-11-20 06:40:02.758876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.139 [2024-11-20 06:40:02.759095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.139 [2024-11-20 06:40:02.759114] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.139 [2024-11-20 06:40:02.759127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.139 [2024-11-20 06:40:02.759139] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.139 [2024-11-20 06:40:02.771399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.139 [2024-11-20 06:40:02.771798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.139 [2024-11-20 06:40:02.771841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.139 [2024-11-20 06:40:02.771856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.139 [2024-11-20 06:40:02.772124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.139 [2024-11-20 06:40:02.772406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.139 [2024-11-20 06:40:02.772429] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.139 [2024-11-20 06:40:02.772443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.139 [2024-11-20 06:40:02.772455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.139 [2024-11-20 06:40:02.784728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.139 [2024-11-20 06:40:02.785102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.139 [2024-11-20 06:40:02.785130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.139 [2024-11-20 06:40:02.785146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.139 [2024-11-20 06:40:02.785384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.139 [2024-11-20 06:40:02.785612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.139 [2024-11-20 06:40:02.785631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.139 [2024-11-20 06:40:02.785643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.139 [2024-11-20 06:40:02.785655] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.139 [2024-11-20 06:40:02.798062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.798509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.798537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.798553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.798785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.798999] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.799018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.799030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.799041] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.140 [2024-11-20 06:40:02.811381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.811756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.811784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.811800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.812030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.812245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.812264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.812295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.812317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.140 [2024-11-20 06:40:02.824728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.825102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.825145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.825161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.825446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.825658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.825679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.825692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.825704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.140 [2024-11-20 06:40:02.837958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.838332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.838360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.838376] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.838605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.838820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.838840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.838852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.838863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.140 [2024-11-20 06:40:02.851165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.851511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.851539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.851555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.851776] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.851991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.852011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.852023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.852034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.140 [2024-11-20 06:40:02.864401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.864829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.864855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.864869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.865105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.865345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.865381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.865394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.865406] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.140 [2024-11-20 06:40:02.877670] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.877981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.878007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.878022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.878222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.878487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.878508] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.878521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.878533] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.140 [2024-11-20 06:40:02.890972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.891348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.891377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.891393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.891635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.891849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.140 [2024-11-20 06:40:02.891869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.140 [2024-11-20 06:40:02.891881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.140 [2024-11-20 06:40:02.891893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.140 [2024-11-20 06:40:02.904236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.140 [2024-11-20 06:40:02.904623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.140 [2024-11-20 06:40:02.904652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.140 [2024-11-20 06:40:02.904673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.140 [2024-11-20 06:40:02.904901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.140 [2024-11-20 06:40:02.905116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.141 [2024-11-20 06:40:02.905135] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.141 [2024-11-20 06:40:02.905147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.141 [2024-11-20 06:40:02.905159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.141 [2024-11-20 06:40:02.917463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.141 [2024-11-20 06:40:02.917854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.141 [2024-11-20 06:40:02.917882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.141 [2024-11-20 06:40:02.917897] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.141 [2024-11-20 06:40:02.918132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.141 [2024-11-20 06:40:02.918374] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.141 [2024-11-20 06:40:02.918395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.141 [2024-11-20 06:40:02.918408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.141 [2024-11-20 06:40:02.918420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.141 [2024-11-20 06:40:02.930669] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.141 [2024-11-20 06:40:02.931000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.141 [2024-11-20 06:40:02.931028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.141 [2024-11-20 06:40:02.931044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.141 [2024-11-20 06:40:02.931267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.141 [2024-11-20 06:40:02.931502] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.141 [2024-11-20 06:40:02.931523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.141 [2024-11-20 06:40:02.931536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.141 [2024-11-20 06:40:02.931548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.141 [2024-11-20 06:40:02.943957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.141 [2024-11-20 06:40:02.944329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.141 [2024-11-20 06:40:02.944372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.141 [2024-11-20 06:40:02.944387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.141 [2024-11-20 06:40:02.944634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.141 [2024-11-20 06:40:02.944838] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.141 [2024-11-20 06:40:02.944857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.141 [2024-11-20 06:40:02.944869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.141 [2024-11-20 06:40:02.944881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.141 [2024-11-20 06:40:02.957109] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.141 [2024-11-20 06:40:02.957468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.141 [2024-11-20 06:40:02.957497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.141 [2024-11-20 06:40:02.957513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.141 [2024-11-20 06:40:02.957741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.141 [2024-11-20 06:40:02.957955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.141 [2024-11-20 06:40:02.957975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.141 [2024-11-20 06:40:02.957987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.141 [2024-11-20 06:40:02.957998] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.461 [2024-11-20 06:40:02.970785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.461 [2024-11-20 06:40:02.971128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.461 [2024-11-20 06:40:02.971156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.461 [2024-11-20 06:40:02.971171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.461 [2024-11-20 06:40:02.971394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.461 [2024-11-20 06:40:02.971612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.461 [2024-11-20 06:40:02.971633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.461 [2024-11-20 06:40:02.971646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.461 [2024-11-20 06:40:02.971659] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.462 [2024-11-20 06:40:02.984468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:02.984913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:02.984941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:02.984957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:02.985187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:02.985442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:02.985465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:02.985478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:02.985496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.462 [2024-11-20 06:40:02.997896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:02.998268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:02.998296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:02.998322] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:02.998537] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:02.998755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:02.998774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:02.998786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:02.998797] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.462 [2024-11-20 06:40:03.011187] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.011583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.011613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.011629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.011861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.012076] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.012097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.012109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.012121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.462 [2024-11-20 06:40:03.024656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.025031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.025059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.025075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.025312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.025538] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.025560] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.025587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.025600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.462 [2024-11-20 06:40:03.038074] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.038430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.038460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.038476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.038718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.038933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.038953] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.038966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.038977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.462 [2024-11-20 06:40:03.051462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.051872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.051901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.051918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.052159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.052399] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.052420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.052433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.052445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.462 [2024-11-20 06:40:03.064825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.065173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.065202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.065218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.065443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.065684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.065703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.065715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.065726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.462 [2024-11-20 06:40:03.078141] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.078534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.078563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.078579] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.078823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.079021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.079041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.079053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.079064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.462 [2024-11-20 06:40:03.091483] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.091833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.091861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.091876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.092101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.092367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.092397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.462 [2024-11-20 06:40:03.092414] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.462 [2024-11-20 06:40:03.092426] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.462 [2024-11-20 06:40:03.104745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.462 [2024-11-20 06:40:03.105062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.462 [2024-11-20 06:40:03.105104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.462 [2024-11-20 06:40:03.105119] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.462 [2024-11-20 06:40:03.105359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.462 [2024-11-20 06:40:03.105587] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.462 [2024-11-20 06:40:03.105607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.105634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.105645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.463 [2024-11-20 06:40:03.117928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.118332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.118375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.118392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.118619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.118833] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.118857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.118870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.118881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.463 [2024-11-20 06:40:03.131153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.131510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.131539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.131555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.131777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.131978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.131997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.132009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.132022] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.463 [2024-11-20 06:40:03.144540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.144932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.144961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.144977] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.145210] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.145438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.145461] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.145473] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.145484] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.463 [2024-11-20 06:40:03.157956] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.158308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.158337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.158353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.158602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.158822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.158842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.158855] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.158871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.463 [2024-11-20 06:40:03.171709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.172084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.172128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.172143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.172407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.172612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.172632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.172659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.172671] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.463 [2024-11-20 06:40:03.184979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.185367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.185396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.185413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.185626] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.185864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.185884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.185896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.185907] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.463 [2024-11-20 06:40:03.198388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.198848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.198876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.198891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.199132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.199360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.199380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.199393] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.199405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.463 [2024-11-20 06:40:03.211717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.212152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.212180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.212196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.212447] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.212667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.212686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.212698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.212709] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.463 [2024-11-20 06:40:03.224979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.225353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.225382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.225399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.225628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.463 [2024-11-20 06:40:03.225866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.463 [2024-11-20 06:40:03.225888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.463 [2024-11-20 06:40:03.225901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.463 [2024-11-20 06:40:03.225913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.463 5323.00 IOPS, 20.79 MiB/s [2024-11-20T05:40:03.299Z] [2024-11-20 06:40:03.238375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.463 [2024-11-20 06:40:03.238804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.463 [2024-11-20 06:40:03.238846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.463 [2024-11-20 06:40:03.238862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.463 [2024-11-20 06:40:03.239102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.464 [2024-11-20 06:40:03.239327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.464 [2024-11-20 06:40:03.239348] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.464 [2024-11-20 06:40:03.239361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.464 [2024-11-20 06:40:03.239374] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.464 [2024-11-20 06:40:03.251707] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.464 [2024-11-20 06:40:03.252079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-11-20 06:40:03.252122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.464 [2024-11-20 06:40:03.252139] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.464 [2024-11-20 06:40:03.252428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.464 [2024-11-20 06:40:03.252648] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.464 [2024-11-20 06:40:03.252667] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.464 [2024-11-20 06:40:03.252679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.464 [2024-11-20 06:40:03.252690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
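For anyone triaging this stretch of the log: errno 111 on Linux is ECONNREFUSED, so each posix_sock_create error above means nothing was accepting TCP connections on 10.0.0.2 port 4420 at that instant, and every bdev_nvme reset attempt dies at the socket connect step before the controller can be reinitialized. A minimal sketch of the same failure mode outside SPDK, assuming a loopback port with no listener, looks like this:

    import errno
    import socket

    # Illustrative only -- not part of the SPDK test or this CI run.
    # Connecting to a port with no listener fails the same way
    # posix_sock_create reports above: connect() -> errno 111 (ECONNREFUSED).
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(1.0)
    try:
        s.connect(("127.0.0.1", 4420))  # assumption: nothing listening on this port
    except OSError as e:
        print(e.errno, errno.errorcode.get(e.errno))  # expected: 111 ECONNREFUSED
    finally:
        s.close()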
00:29:31.464 [2024-11-20 06:40:03.265222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.464 [2024-11-20 06:40:03.265584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.464 [2024-11-20 06:40:03.265614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.464 [2024-11-20 06:40:03.265630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.464 [2024-11-20 06:40:03.265844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.464 [2024-11-20 06:40:03.266083] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.464 [2024-11-20 06:40:03.266104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.464 [2024-11-20 06:40:03.266116] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.464 [2024-11-20 06:40:03.266128] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.748 [2024-11-20 06:40:03.278811] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.279179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.279208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.279224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.279446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.279694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.279715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.279729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.279742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.748 [2024-11-20 06:40:03.292107] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.292481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.292509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.292525] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.292777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.292975] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.292999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.293013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.293024] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.748 [2024-11-20 06:40:03.305456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.305815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.305857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.305873] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.306093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.306339] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.306361] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.306374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.306387] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.748 [2024-11-20 06:40:03.318828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.319214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.319243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.319258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.319495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.319713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.319732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.319745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.319756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.748 [2024-11-20 06:40:03.332075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.332464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.332493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.332509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.332737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.332952] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.332972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.332984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.333000] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.748 [2024-11-20 06:40:03.345252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.345611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.345653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.345669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.345892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.346106] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.346125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.346137] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.346148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.748 [2024-11-20 06:40:03.358622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.359008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.359036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.359052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.359293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.359520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.359541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.359553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.359565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.748 [2024-11-20 06:40:03.371724] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.372106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.372147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.372162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.372413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.372612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.372631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.372643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.372654] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.748 [2024-11-20 06:40:03.384716] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.385089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.385132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.748 [2024-11-20 06:40:03.385146] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.748 [2024-11-20 06:40:03.385398] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.748 [2024-11-20 06:40:03.385626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.748 [2024-11-20 06:40:03.385660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.748 [2024-11-20 06:40:03.385672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.748 [2024-11-20 06:40:03.385683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.748 [2024-11-20 06:40:03.397796] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.748 [2024-11-20 06:40:03.398160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.748 [2024-11-20 06:40:03.398202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.398217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.398481] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.398696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.398715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.398727] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.398738] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 06:40:03.410845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.411145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.411185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.411200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.411442] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.411671] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.411690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.411702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.411713] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 06:40:03.423852] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.424233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.424277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.424293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.424584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.424792] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.424811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.424823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.424834] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 06:40:03.436936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.437301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.437352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.437367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.437612] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.437805] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.437823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.437835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.437846] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 06:40:03.449982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.450473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.450514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.450530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.450777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.450970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.450989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.451000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.451011] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 06:40:03.463014] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.463445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.463487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.463503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.463742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.463950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.463973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.463986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.463997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 06:40:03.476119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.476517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.476545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.476576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.476820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.477029] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.477048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.477060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.477072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 06:40:03.489449] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.489857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.489900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.489915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.490168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.490396] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.490417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.490429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.490441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 06:40:03.503103] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.503594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.503626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.503660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.503900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.504101] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.504120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.504132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.504148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.749 [2024-11-20 06:40:03.516250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.749 [2024-11-20 06:40:03.516621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.749 [2024-11-20 06:40:03.516654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.749 [2024-11-20 06:40:03.516686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.749 [2024-11-20 06:40:03.516914] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.749 [2024-11-20 06:40:03.517107] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.749 [2024-11-20 06:40:03.517126] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.749 [2024-11-20 06:40:03.517138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.749 [2024-11-20 06:40:03.517149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.749 [2024-11-20 06:40:03.529479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 06:40:03.529859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 06:40:03.529887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 06:40:03.529902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.750 [2024-11-20 06:40:03.530137] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.750 [2024-11-20 06:40:03.530389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 06:40:03.530409] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 06:40:03.530423] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 06:40:03.530435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 06:40:03.542729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 06:40:03.543119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 06:40:03.543146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 06:40:03.543162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.750 [2024-11-20 06:40:03.543433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.750 [2024-11-20 06:40:03.543662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 06:40:03.543681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 06:40:03.543692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 06:40:03.543704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:31.750 [2024-11-20 06:40:03.555984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 06:40:03.556347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 06:40:03.556395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 06:40:03.556411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.750 [2024-11-20 06:40:03.556660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.750 [2024-11-20 06:40:03.556867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 06:40:03.556886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 06:40:03.556898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 06:40:03.556908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:31.750 [2024-11-20 06:40:03.569010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:31.750 [2024-11-20 06:40:03.569383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:31.750 [2024-11-20 06:40:03.569427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:31.750 [2024-11-20 06:40:03.569442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:31.750 [2024-11-20 06:40:03.569711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:31.750 [2024-11-20 06:40:03.569905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:31.750 [2024-11-20 06:40:03.569923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:31.750 [2024-11-20 06:40:03.569935] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:31.750 [2024-11-20 06:40:03.569946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.010 [2024-11-20 06:40:03.582567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.582978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.583005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.583020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.583254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.583504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.583526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.583539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.583551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.010 [2024-11-20 06:40:03.595770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.596137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.596180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.596195] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.596465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.596698] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.596717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.596729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.596740] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.010 [2024-11-20 06:40:03.608925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.609361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.609402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.609419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.609658] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.609866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.609885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.609897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.609908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.010 [2024-11-20 06:40:03.622233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.622565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.622606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.622622] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.622842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.623051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.623070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.623082] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.623093] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.010 [2024-11-20 06:40:03.635338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.635723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.635751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.635767] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.635987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.636196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.636220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.636233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.636244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.010 [2024-11-20 06:40:03.648423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.648761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.648788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.648802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.649016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.649231] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.649250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.649262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.649272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.010 [2024-11-20 06:40:03.661687] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.662048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.662075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.662091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.662319] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.662530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.662550] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.010 [2024-11-20 06:40:03.662563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.010 [2024-11-20 06:40:03.662574] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.010 [2024-11-20 06:40:03.674808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.010 [2024-11-20 06:40:03.675171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.010 [2024-11-20 06:40:03.675214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.010 [2024-11-20 06:40:03.675230] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.010 [2024-11-20 06:40:03.675492] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.010 [2024-11-20 06:40:03.675722] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.010 [2024-11-20 06:40:03.675741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.675752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.675763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.011 [2024-11-20 06:40:03.687855] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.688345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.688388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.688404] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.688656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.688863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.688881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.688893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.688904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.011 [2024-11-20 06:40:03.700908] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.701340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.701383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.701398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.701650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.701858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.701876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.701888] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.701899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.011 [2024-11-20 06:40:03.714005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.714368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.714397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.714412] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.714651] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.714844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.714863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.714874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.714885] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.011 [2024-11-20 06:40:03.727234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.727611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.727644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.727661] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.727892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.728137] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.728158] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.728171] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.728183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.011 [2024-11-20 06:40:03.740402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.740823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.740863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.740879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.741113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.741333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.741354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.741367] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.741379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.011 [2024-11-20 06:40:03.753479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.753908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.753951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.753967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.754206] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.754446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.754466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.754479] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.754490] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.011 [2024-11-20 06:40:03.766562] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.766925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.766952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.766967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.767205] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.767443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.767463] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.767475] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.767486] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.011 [2024-11-20 06:40:03.779594] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.780019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.780046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.780061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.780295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.780503] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.780523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.780535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.780546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.011 [2024-11-20 06:40:03.792605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.792969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.793010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.793025] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.793272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.793509] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.011 [2024-11-20 06:40:03.793529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.011 [2024-11-20 06:40:03.793542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.011 [2024-11-20 06:40:03.793553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.011 [2024-11-20 06:40:03.805683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.011 [2024-11-20 06:40:03.806015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.011 [2024-11-20 06:40:03.806042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.011 [2024-11-20 06:40:03.806058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.011 [2024-11-20 06:40:03.806281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.011 [2024-11-20 06:40:03.806520] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.012 [2024-11-20 06:40:03.806541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.012 [2024-11-20 06:40:03.806559] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.012 [2024-11-20 06:40:03.806572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.012 [2024-11-20 06:40:03.818733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.012 [2024-11-20 06:40:03.819045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.012 [2024-11-20 06:40:03.819085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.012 [2024-11-20 06:40:03.819100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.012 [2024-11-20 06:40:03.819299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.012 [2024-11-20 06:40:03.819523] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.012 [2024-11-20 06:40:03.819542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.012 [2024-11-20 06:40:03.819554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.012 [2024-11-20 06:40:03.819565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.012 [2024-11-20 06:40:03.831817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.012 [2024-11-20 06:40:03.832180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.012 [2024-11-20 06:40:03.832207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.012 [2024-11-20 06:40:03.832223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.012 [2024-11-20 06:40:03.832477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.012 [2024-11-20 06:40:03.832708] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.012 [2024-11-20 06:40:03.832727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.012 [2024-11-20 06:40:03.832739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.012 [2024-11-20 06:40:03.832750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.272 [2024-11-20 06:40:03.845328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.272 [2024-11-20 06:40:03.845689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.272 [2024-11-20 06:40:03.845715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.272 [2024-11-20 06:40:03.845729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.272 [2024-11-20 06:40:03.845929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.272 [2024-11-20 06:40:03.846154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.272 [2024-11-20 06:40:03.846173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.272 [2024-11-20 06:40:03.846185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.272 [2024-11-20 06:40:03.846196] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.272 [2024-11-20 06:40:03.858533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.272 [2024-11-20 06:40:03.858909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.272 [2024-11-20 06:40:03.858935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.272 [2024-11-20 06:40:03.858965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.272 [2024-11-20 06:40:03.859186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.272 [2024-11-20 06:40:03.859422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.272 [2024-11-20 06:40:03.859442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.272 [2024-11-20 06:40:03.859454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.272 [2024-11-20 06:40:03.859466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.272 [2024-11-20 06:40:03.871643] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.272 [2024-11-20 06:40:03.871969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.272 [2024-11-20 06:40:03.871996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.272 [2024-11-20 06:40:03.872012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.272 [2024-11-20 06:40:03.872232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.272 [2024-11-20 06:40:03.872473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.272 [2024-11-20 06:40:03.872494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.272 [2024-11-20 06:40:03.872506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.272 [2024-11-20 06:40:03.872518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.272 [2024-11-20 06:40:03.884629] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.272 [2024-11-20 06:40:03.884993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.272 [2024-11-20 06:40:03.885036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.272 [2024-11-20 06:40:03.885051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.272 [2024-11-20 06:40:03.885327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.272 [2024-11-20 06:40:03.885553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.272 [2024-11-20 06:40:03.885574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.272 [2024-11-20 06:40:03.885587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.272 [2024-11-20 06:40:03.885599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.272 [2024-11-20 06:40:03.897718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.272 [2024-11-20 06:40:03.898203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.272 [2024-11-20 06:40:03.898244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.272 [2024-11-20 06:40:03.898266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.272 [2024-11-20 06:40:03.898513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.272 [2024-11-20 06:40:03.898741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.272 [2024-11-20 06:40:03.898760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.272 [2024-11-20 06:40:03.898772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.272 [2024-11-20 06:40:03.898782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.272 [2024-11-20 06:40:03.910930] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.272 [2024-11-20 06:40:03.911417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.272 [2024-11-20 06:40:03.911444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.272 [2024-11-20 06:40:03.911475] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.272 [2024-11-20 06:40:03.911728] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.272 [2024-11-20 06:40:03.911920] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.272 [2024-11-20 06:40:03.911939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.272 [2024-11-20 06:40:03.911951] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.911962] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.273 [2024-11-20 06:40:03.924089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:03.924639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:03.924683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:03.924698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:03.924965] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:03.925158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:03.925177] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:03.925188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.925200] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.273 [2024-11-20 06:40:03.937194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:03.937589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:03.937618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:03.937649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:03.937902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:03.938115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:03.938133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:03.938145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.938155] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.273 [2024-11-20 06:40:03.950288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:03.950666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:03.950693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:03.950708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:03.950923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:03.951131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:03.951150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:03.951162] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.951172] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.273 [2024-11-20 06:40:03.963377] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:03.963744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:03.963787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:03.963802] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:03.964054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:03.964262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:03.964281] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:03.964318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.964331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.273 [2024-11-20 06:40:03.976441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:03.976804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:03.976846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:03.976861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:03.977113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:03.977362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:03.977382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:03.977400] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.977413] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.273 [2024-11-20 06:40:03.989677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:03.989981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:03.990006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:03.990021] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:03.990215] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:03.990460] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:03.990481] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:03.990494] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:03.990507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.273 [2024-11-20 06:40:04.002865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:04.003227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:04.003269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:04.003284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:04.003557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:04.003770] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:04.003789] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:04.003800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:04.003811] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.273 [2024-11-20 06:40:04.016033] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:04.016457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:04.016499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:04.016516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:04.016755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:04.016962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:04.016981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:04.016993] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:04.017003] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.273 [2024-11-20 06:40:04.029275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:04.029701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:04.029745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:04.029761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.273 [2024-11-20 06:40:04.030005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.273 [2024-11-20 06:40:04.030198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.273 [2024-11-20 06:40:04.030218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.273 [2024-11-20 06:40:04.030229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.273 [2024-11-20 06:40:04.030241] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.273 [2024-11-20 06:40:04.042637] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.273 [2024-11-20 06:40:04.043015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.273 [2024-11-20 06:40:04.043043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.273 [2024-11-20 06:40:04.043058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.274 [2024-11-20 06:40:04.043292] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.274 [2024-11-20 06:40:04.043521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.274 [2024-11-20 06:40:04.043542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.274 [2024-11-20 06:40:04.043555] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.274 [2024-11-20 06:40:04.043567] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.274 [2024-11-20 06:40:04.055786] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.274 [2024-11-20 06:40:04.056154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.274 [2024-11-20 06:40:04.056198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.274 [2024-11-20 06:40:04.056214] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.274 [2024-11-20 06:40:04.056477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.274 [2024-11-20 06:40:04.056691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.274 [2024-11-20 06:40:04.056709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.274 [2024-11-20 06:40:04.056721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.274 [2024-11-20 06:40:04.056733] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.274 [2024-11-20 06:40:04.069009] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.274 [2024-11-20 06:40:04.069376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.274 [2024-11-20 06:40:04.069419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.274 [2024-11-20 06:40:04.069442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.274 [2024-11-20 06:40:04.069708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.274 [2024-11-20 06:40:04.069900] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.274 [2024-11-20 06:40:04.069918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.274 [2024-11-20 06:40:04.069930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.274 [2024-11-20 06:40:04.069941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.274 [2024-11-20 06:40:04.082061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.274 [2024-11-20 06:40:04.082487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.274 [2024-11-20 06:40:04.082530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.274 [2024-11-20 06:40:04.082546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.274 [2024-11-20 06:40:04.082786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.274 [2024-11-20 06:40:04.082993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.274 [2024-11-20 06:40:04.083012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.274 [2024-11-20 06:40:04.083023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.274 [2024-11-20 06:40:04.083034] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.274 [2024-11-20 06:40:04.095201] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.274 [2024-11-20 06:40:04.095591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.274 [2024-11-20 06:40:04.095619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.274 [2024-11-20 06:40:04.095634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.274 [2024-11-20 06:40:04.095852] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.274 [2024-11-20 06:40:04.096060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.274 [2024-11-20 06:40:04.096079] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.274 [2024-11-20 06:40:04.096091] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.274 [2024-11-20 06:40:04.096101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.534 [2024-11-20 06:40:04.108381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.108743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.108769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.108784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.108998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.109211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.109229] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.109241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.109252] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.534 [2024-11-20 06:40:04.121467] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.121859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.121886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.121902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.122122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.122373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.122393] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.122406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.122417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.534 [2024-11-20 06:40:04.134588] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.135083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.135125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.135141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.135420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.135625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.135645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.135657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.135669] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.534 [2024-11-20 06:40:04.147613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.148009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.148037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.148052] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.148273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.148519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.148548] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.148566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.148578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.534 [2024-11-20 06:40:04.160655] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.161022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.161049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.161064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.161315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.161535] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.161555] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.161568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.161579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.534 [2024-11-20 06:40:04.173731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.174093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.174135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.174151] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.174414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.174636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.174670] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.174681] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.174693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.534 [2024-11-20 06:40:04.186923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.534 [2024-11-20 06:40:04.187283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.534 [2024-11-20 06:40:04.187333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.534 [2024-11-20 06:40:04.187351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.534 [2024-11-20 06:40:04.187591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.534 [2024-11-20 06:40:04.187816] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.534 [2024-11-20 06:40:04.187835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.534 [2024-11-20 06:40:04.187846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.534 [2024-11-20 06:40:04.187857] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.535 [2024-11-20 06:40:04.200007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.200501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.200544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.200560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.200810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.201019] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.201037] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.201049] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.201059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.535 [2024-11-20 06:40:04.213076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.213461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.213503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.213519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.213740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.213947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.213966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.213978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.213989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.535 [2024-11-20 06:40:04.226105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.226476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.226518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.226533] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.226779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.226987] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.227005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.227017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.227028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.535 4258.40 IOPS, 16.63 MiB/s [2024-11-20T05:40:04.371Z] [2024-11-20 06:40:04.239275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.239718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.239747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.239768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.240010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.240215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.240235] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.240247] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.240259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.535 [2024-11-20 06:40:04.252685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.253009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.253051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.253067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.253288] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.253532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.253552] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.253564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.253576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.535 [2024-11-20 06:40:04.265795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.266154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.266181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.266196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.266463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.266678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.266696] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.266708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.266719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.535 [2024-11-20 06:40:04.278841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.279262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.279289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.279328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.279571] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.279786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.279805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.279816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.279827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.535 [2024-11-20 06:40:04.292112] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.292534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.292561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.292576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.292806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.293013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.293031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.293043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.293054] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.535 [2024-11-20 06:40:04.305309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.305645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.305725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.305741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.305977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.306184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.306202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.306214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.535 [2024-11-20 06:40:04.306225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.535 [2024-11-20 06:40:04.318412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.535 [2024-11-20 06:40:04.318783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.535 [2024-11-20 06:40:04.318809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.535 [2024-11-20 06:40:04.318824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.535 [2024-11-20 06:40:04.319038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.535 [2024-11-20 06:40:04.319247] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.535 [2024-11-20 06:40:04.319266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.535 [2024-11-20 06:40:04.319284] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.536 [2024-11-20 06:40:04.319297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.536 [2024-11-20 06:40:04.331679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.536 [2024-11-20 06:40:04.332125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.536 [2024-11-20 06:40:04.332168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.536 [2024-11-20 06:40:04.332183] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.536 [2024-11-20 06:40:04.332450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.536 [2024-11-20 06:40:04.332684] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.536 [2024-11-20 06:40:04.332703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.536 [2024-11-20 06:40:04.332715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.536 [2024-11-20 06:40:04.332725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.536 [2024-11-20 06:40:04.344892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.536 [2024-11-20 06:40:04.345327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.536 [2024-11-20 06:40:04.345367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.536 [2024-11-20 06:40:04.345382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.536 [2024-11-20 06:40:04.345623] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.536 [2024-11-20 06:40:04.345815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.536 [2024-11-20 06:40:04.345834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.536 [2024-11-20 06:40:04.345846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.536 [2024-11-20 06:40:04.345856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.536 [2024-11-20 06:40:04.358391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.536 [2024-11-20 06:40:04.358810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.536 [2024-11-20 06:40:04.358862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.536 [2024-11-20 06:40:04.358879] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.536 [2024-11-20 06:40:04.359118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.536 [2024-11-20 06:40:04.359358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.536 [2024-11-20 06:40:04.359380] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.536 [2024-11-20 06:40:04.359392] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.536 [2024-11-20 06:40:04.359405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.795 [2024-11-20 06:40:04.372060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.795 [2024-11-20 06:40:04.372455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.795 [2024-11-20 06:40:04.372485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.795 [2024-11-20 06:40:04.372501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.795 [2024-11-20 06:40:04.372716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.795 [2024-11-20 06:40:04.372936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.795 [2024-11-20 06:40:04.372957] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.795 [2024-11-20 06:40:04.372971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.795 [2024-11-20 06:40:04.372984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.795 [2024-11-20 06:40:04.385380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.795 [2024-11-20 06:40:04.385913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.795 [2024-11-20 06:40:04.385963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.795 [2024-11-20 06:40:04.385982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.795 [2024-11-20 06:40:04.386250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.386496] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.386518] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.386531] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.386544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.796 [2024-11-20 06:40:04.398730] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.399157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.399185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.399216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.399440] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.399683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.399703] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.399715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.399725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.796 [2024-11-20 06:40:04.412036] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.412394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.412444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.412487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.412722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.412915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.412933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.412945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.412956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.796 [2024-11-20 06:40:04.425266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.425677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.425725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.425741] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.425987] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.426180] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.426198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.426210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.426221] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.796 [2024-11-20 06:40:04.438464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.438855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.438903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.438918] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.439186] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.439422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.439442] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.439454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.439466] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.796 [2024-11-20 06:40:04.451731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.452159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.452186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.452202] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.452451] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.452678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.452697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.452709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.452719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.796 [2024-11-20 06:40:04.464997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.465437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.465466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.465482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.465721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.465929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.465948] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.465959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.465970] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.796 [2024-11-20 06:40:04.478217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.478735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.478777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.478793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.479042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.479250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.479269] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.479280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.479316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.796 [2024-11-20 06:40:04.491197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.491567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.491595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.491611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.491840] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.492072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.492092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.492104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.492121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.796 [2024-11-20 06:40:04.504488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.504876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.504904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.504919] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.796 [2024-11-20 06:40:04.505159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.796 [2024-11-20 06:40:04.505412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.796 [2024-11-20 06:40:04.505433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.796 [2024-11-20 06:40:04.505445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.796 [2024-11-20 06:40:04.505457] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.796 [2024-11-20 06:40:04.517728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.796 [2024-11-20 06:40:04.518090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.796 [2024-11-20 06:40:04.518117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.796 [2024-11-20 06:40:04.518132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.518378] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.518594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.518614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.518626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.518636] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.797 [2024-11-20 06:40:04.530728] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.531218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.531260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.531276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.531513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.531742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.531761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.531773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.531784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.797 [2024-11-20 06:40:04.543704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.544077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.544119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.544134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.544414] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.544614] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.544632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.544645] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.544656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.797 [2024-11-20 06:40:04.556791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.557191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.557228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.557260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.557519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.557731] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.557749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.557761] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.557772] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.797 [2024-11-20 06:40:04.569873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.570211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.570248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.570281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.570548] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.570759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.570778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.570790] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.570801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.797 [2024-11-20 06:40:04.582907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.583344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.583387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.583407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.583668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.583861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.583879] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.583891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.583902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.797 [2024-11-20 06:40:04.596037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.596527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.596570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.596587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.596837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.597046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.597065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.597076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.597087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:32.797 [2024-11-20 06:40:04.609076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.609412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.609440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.609455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.609676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.609885] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.609903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.609915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.609926] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:32.797 [2024-11-20 06:40:04.622083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:32.797 [2024-11-20 06:40:04.622452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:32.797 [2024-11-20 06:40:04.622495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:32.797 [2024-11-20 06:40:04.622511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:32.797 [2024-11-20 06:40:04.622779] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:32.797 [2024-11-20 06:40:04.622972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:32.797 [2024-11-20 06:40:04.622995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:32.797 [2024-11-20 06:40:04.623007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:32.797 [2024-11-20 06:40:04.623018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.058 [2024-11-20 06:40:04.635639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 [2024-11-20 06:40:04.636031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.636059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.058 [2024-11-20 06:40:04.636074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.058 [2024-11-20 06:40:04.636296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.058 [2024-11-20 06:40:04.636522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.058 [2024-11-20 06:40:04.636541] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.058 [2024-11-20 06:40:04.636554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.058 [2024-11-20 06:40:04.636565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.058 [2024-11-20 06:40:04.648663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 [2024-11-20 06:40:04.649027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.649055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.058 [2024-11-20 06:40:04.649086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.058 [2024-11-20 06:40:04.649340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.058 [2024-11-20 06:40:04.649553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.058 [2024-11-20 06:40:04.649573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.058 [2024-11-20 06:40:04.649585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.058 [2024-11-20 06:40:04.649596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.058 [2024-11-20 06:40:04.661725] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 [2024-11-20 06:40:04.662108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.662136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.058 [2024-11-20 06:40:04.662152] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.058 [2024-11-20 06:40:04.662404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.058 [2024-11-20 06:40:04.662633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.058 [2024-11-20 06:40:04.662651] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.058 [2024-11-20 06:40:04.662663] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.058 [2024-11-20 06:40:04.662679] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.058 [2024-11-20 06:40:04.674829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 [2024-11-20 06:40:04.675386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.675415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.058 [2024-11-20 06:40:04.675431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.058 [2024-11-20 06:40:04.675667] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.058 [2024-11-20 06:40:04.675876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.058 [2024-11-20 06:40:04.675895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.058 [2024-11-20 06:40:04.675907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.058 [2024-11-20 06:40:04.675919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.058 [2024-11-20 06:40:04.687975] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 [2024-11-20 06:40:04.688336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.688364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.058 [2024-11-20 06:40:04.688381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.058 [2024-11-20 06:40:04.688622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.058 [2024-11-20 06:40:04.688831] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.058 [2024-11-20 06:40:04.688850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.058 [2024-11-20 06:40:04.688862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.058 [2024-11-20 06:40:04.688873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2202190 Killed "${NVMF_APP[@]}" "$@" 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=2203154 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 2203154 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # '[' -z 2203154 ']' 00:29:33.058 [2024-11-20 06:40:04.701501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
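Aside (not part of the captured output): the shell message above shows the first nvmf_tgt instance, pid 2202190, being killed by bdevperf.sh, after which tgt_init starts a replacement target (nvmfpid=2203154). Until that new target is listening again on 10.0.0.2:4420, every reconnect attempt from the bdevperf host fails in posix_sock_create with errno = 111, which on Linux is ECONNREFUSED. A quick, hypothetical way to confirm that errno mapping, assuming python3 is available on the test node:

  # Not run by the test: map errno 111 to its symbolic name and message.
  python3 -c 'import errno, os; print(errno.errorcode[111], "=", os.strerror(111))'
  # Expected output on Linux: ECONNREFUSED = Connection refused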
00:29:33.058 [2024-11-20 06:40:04.701892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.701921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:33.058 [2024-11-20 06:40:04.701938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.058 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.058 [2024-11-20 06:40:04.702152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.058 [2024-11-20 06:40:04.702415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.058 [2024-11-20 06:40:04.702438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.058 [2024-11-20 06:40:04.702451] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.058 [2024-11-20 06:40:04.702464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.058 [2024-11-20 06:40:04.714907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.058 [2024-11-20 06:40:04.715246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.058 [2024-11-20 06:40:04.715275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.715291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.715512] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.715755] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.715775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.715787] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.715798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.059 [2024-11-20 06:40:04.728277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.728697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.728739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.728756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.728990] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.729189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.729209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.729221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.729232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.059 [2024-11-20 06:40:04.741890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.742269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.742310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.742329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.742543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.742807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.742828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.742841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.742862] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.059 [2024-11-20 06:40:04.748487] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:29:33.059 [2024-11-20 06:40:04.748548] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:33.059 [2024-11-20 06:40:04.755331] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.755751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.755794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.755811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.756037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.756251] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.756270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.756282] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.756326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.059 [2024-11-20 06:40:04.768723] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.769102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.769146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.769162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.769416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.769658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.769678] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.769690] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.769701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.059 [2024-11-20 06:40:04.782016] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.782454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.782483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.782499] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.782741] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.782939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.782959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.782971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.782982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.059 [2024-11-20 06:40:04.795501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.795850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.795892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.795907] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.796128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.796375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.796397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.796410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.796423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.059 [2024-11-20 06:40:04.808821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.809191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.809219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.809235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.809472] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.809713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.809732] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.809744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.809755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.059 [2024-11-20 06:40:04.822192] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.822556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.822585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.822606] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.822844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.823057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.823076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.059 [2024-11-20 06:40:04.823088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.059 [2024-11-20 06:40:04.823100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.059 [2024-11-20 06:40:04.823177] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:33.059 [2024-11-20 06:40:04.835494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.059 [2024-11-20 06:40:04.836173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.059 [2024-11-20 06:40:04.836225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.059 [2024-11-20 06:40:04.836245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.059 [2024-11-20 06:40:04.836516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.059 [2024-11-20 06:40:04.836777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.059 [2024-11-20 06:40:04.836798] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.060 [2024-11-20 06:40:04.836813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.060 [2024-11-20 06:40:04.836827] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.060 [2024-11-20 06:40:04.848907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.060 [2024-11-20 06:40:04.849299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.060 [2024-11-20 06:40:04.849336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.060 [2024-11-20 06:40:04.849352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.060 [2024-11-20 06:40:04.849582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.060 [2024-11-20 06:40:04.849799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.060 [2024-11-20 06:40:04.849818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.060 [2024-11-20 06:40:04.849831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.060 [2024-11-20 06:40:04.849843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.060 [2024-11-20 06:40:04.862185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.060 [2024-11-20 06:40:04.862582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.060 [2024-11-20 06:40:04.862611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.060 [2024-11-20 06:40:04.862627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.060 [2024-11-20 06:40:04.862880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.060 [2024-11-20 06:40:04.863079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.060 [2024-11-20 06:40:04.863099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.060 [2024-11-20 06:40:04.863111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.060 [2024-11-20 06:40:04.863123] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.060 [2024-11-20 06:40:04.875373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.060 [2024-11-20 06:40:04.875772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.060 [2024-11-20 06:40:04.875814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.060 [2024-11-20 06:40:04.875830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.060 [2024-11-20 06:40:04.876103] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.060 [2024-11-20 06:40:04.876329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.060 [2024-11-20 06:40:04.876382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.060 [2024-11-20 06:40:04.876396] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.060 [2024-11-20 06:40:04.876409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.060 [2024-11-20 06:40:04.880556] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:33.060 [2024-11-20 06:40:04.880587] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:33.060 [2024-11-20 06:40:04.880601] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:33.060 [2024-11-20 06:40:04.880612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:33.060 [2024-11-20 06:40:04.880621] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:33.060 [2024-11-20 06:40:04.882009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:33.060 [2024-11-20 06:40:04.882075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:33.060 [2024-11-20 06:40:04.882078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.060 [2024-11-20 06:40:04.889066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.060 [2024-11-20 06:40:04.889501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.060 [2024-11-20 06:40:04.889536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.060 [2024-11-20 06:40:04.889555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.060 [2024-11-20 06:40:04.889775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.060 [2024-11-20 06:40:04.889998] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.060 [2024-11-20 06:40:04.890019] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.060 [2024-11-20 06:40:04.890034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.060 [2024-11-20 06:40:04.890049] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.319 [2024-11-20 06:40:04.902618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.903102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.903139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.903158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.903390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.903622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.903644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.903659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.903675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
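Aside (not part of the captured output): the three "Reactor started on core 1/2/3" notices and "Total cores available: 3" above follow from the -m 0xE core mask passed to the freshly started nvmf_tgt, since 0xE (binary 1110) selects cores 1-3. A minimal sketch of the equivalent manual invocation, assuming the same build tree and the cvl_0_0_ns_spdk namespace used by this run:

  # Hypothetical manual start mirroring the command traced earlier in this log.
  sudo ip netns exec cvl_0_0_ns_spdk \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
  # -m 0xE    -> run reactors on cores 1, 2 and 3
  # -e 0xFFFF -> enable all tracepoint groups (hence the "Tracepoint Group Mask 0xFFFF" notice)
  # -i 0      -> shared-memory instance id, matching the "spdk_trace -s nvmf -i 0" hint above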
00:29:33.319 [2024-11-20 06:40:04.916222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.916726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.916766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.916785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.917024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.917240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.917261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.917277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.917292] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.319 [2024-11-20 06:40:04.929690] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.930165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.930203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.930222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.930454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.930691] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.930712] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.930729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.930745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.319 [2024-11-20 06:40:04.943284] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.943738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.943773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.943801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.944039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.944254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.944275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.944312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.944330] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.319 [2024-11-20 06:40:04.956927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.957440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.957479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.957497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.957735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.957951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.957972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.957987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.958002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.319 [2024-11-20 06:40:04.970437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.970759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.970791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.970808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.971023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.971250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.971271] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.971285] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.971297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.319 [2024-11-20 06:40:04.983885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.319 [2024-11-20 06:40:04.984232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.319 [2024-11-20 06:40:04.984260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.319 [2024-11-20 06:40:04.984277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.319 [2024-11-20 06:40:04.984499] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.319 [2024-11-20 06:40:04.984727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.319 [2024-11-20 06:40:04.984749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.319 [2024-11-20 06:40:04.984763] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.319 [2024-11-20 06:40:04.984776] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.320 [2024-11-20 06:40:04.997395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:04.997749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:04.997777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.320 [2024-11-20 06:40:04.997793] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.320 [2024-11-20 06:40:04.998007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.320 [2024-11-20 06:40:04.998225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.320 [2024-11-20 06:40:04.998247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.320 [2024-11-20 06:40:04.998260] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.320 [2024-11-20 06:40:04.998273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.320 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:33.320 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@866 -- # return 0 00:29:33.320 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:33.320 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:33.320 06:40:04 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.320 [2024-11-20 06:40:05.011021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:05.011388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:05.011417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.320 [2024-11-20 06:40:05.011434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.320 [2024-11-20 06:40:05.011647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.320 [2024-11-20 06:40:05.011873] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.320 [2024-11-20 06:40:05.011895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.320 [2024-11-20 06:40:05.011909] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.320 [2024-11-20 06:40:05.011921] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.320 [2024-11-20 06:40:05.022053] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.320 [2024-11-20 06:40:05.024628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:05.024964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:05.024993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.320 [2024-11-20 06:40:05.025009] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.320 [2024-11-20 06:40:05.025249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.320 [2024-11-20 06:40:05.025501] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.320 [2024-11-20 06:40:05.025523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.320 [2024-11-20 06:40:05.025537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.320 [2024-11-20 06:40:05.025550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.320 [2024-11-20 06:40:05.038183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:05.038634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:05.038677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.320 [2024-11-20 06:40:05.038694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.320 [2024-11-20 06:40:05.038927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.320 [2024-11-20 06:40:05.039150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.320 [2024-11-20 06:40:05.039171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.320 [2024-11-20 06:40:05.039185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.320 [2024-11-20 06:40:05.039198] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.320 [2024-11-20 06:40:05.051785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:05.052123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:05.052151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.320 [2024-11-20 06:40:05.052167] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.320 [2024-11-20 06:40:05.052391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.320 [2024-11-20 06:40:05.052619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.320 [2024-11-20 06:40:05.052639] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.320 [2024-11-20 06:40:05.052653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.320 [2024-11-20 06:40:05.052672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:33.320 [2024-11-20 06:40:05.065274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:05.065765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:05.065799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.320 [2024-11-20 06:40:05.065818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.320 Malloc0 00:29:33.320 [2024-11-20 06:40:05.066038] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.320 [2024-11-20 06:40:05.066259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.320 [2024-11-20 06:40:05.066280] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.320 [2024-11-20 06:40:05.066314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:33.320 [2024-11-20 06:40:05.066333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.320 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.320 [2024-11-20 06:40:05.078783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.320 [2024-11-20 06:40:05.079140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:33.320 [2024-11-20 06:40:05.079169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdeea40 with addr=10.0.0.2, port=4420 00:29:33.321 [2024-11-20 06:40:05.079185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeea40 is same with the state(6) to be set 00:29:33.321 [2024-11-20 06:40:05.079409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdeea40 (9): Bad file descriptor 00:29:33.321 [2024-11-20 06:40:05.079643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:33.321 [2024-11-20 06:40:05.079664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:33.321 [2024-11-20 06:40:05.079677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:29:33.321 [2024-11-20 06:40:05.079689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:29:33.321 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.321 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:33.321 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:33.321 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:33.321 [2024-11-20 06:40:05.085757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:33.321 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:33.321 06:40:05 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2202476 00:29:33.321 [2024-11-20 06:40:05.092380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:33.583 3548.67 IOPS, 13.86 MiB/s [2024-11-20T05:40:05.419Z] [2024-11-20 06:40:05.247021] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:29:35.449 4214.29 IOPS, 16.46 MiB/s [2024-11-20T05:40:08.287Z] 4750.12 IOPS, 18.56 MiB/s [2024-11-20T05:40:09.659Z] 5153.67 IOPS, 20.13 MiB/s [2024-11-20T05:40:10.592Z] 5484.70 IOPS, 21.42 MiB/s [2024-11-20T05:40:11.524Z] 5747.91 IOPS, 22.45 MiB/s [2024-11-20T05:40:12.458Z] 5980.58 IOPS, 23.36 MiB/s [2024-11-20T05:40:13.392Z] 6168.77 IOPS, 24.10 MiB/s [2024-11-20T05:40:14.326Z] 6334.36 IOPS, 24.74 MiB/s [2024-11-20T05:40:14.326Z] 6477.13 IOPS, 25.30 MiB/s 00:29:42.490 Latency(us) 00:29:42.490 [2024-11-20T05:40:14.326Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:42.490 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:42.490 Verification LBA range: start 0x0 length 0x4000 00:29:42.490 Nvme1n1 : 15.05 6459.38 25.23 10350.90 0.00 7572.01 970.90 45049.93 00:29:42.490 [2024-11-20T05:40:14.326Z] =================================================================================================================== 00:29:42.490 [2024-11-20T05:40:14.326Z] Total : 6459.38 25.23 10350.90 0.00 7572.01 970.90 45049.93 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # sync 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 
00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:42.748 rmmod nvme_tcp 00:29:42.748 rmmod nvme_fabrics 00:29:42.748 rmmod nvme_keyring 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 2203154 ']' 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 2203154 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # '[' -z 2203154 ']' 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # kill -0 2203154 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # uname 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:42.748 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2203154 00:29:43.006 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:29:43.006 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:29:43.006 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2203154' 00:29:43.006 killing process with pid 2203154 00:29:43.006 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@971 -- # kill 2203154 00:29:43.006 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@976 -- # wait 2203154 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:43.265 06:40:14 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:45.168 00:29:45.168 real 0m22.799s 00:29:45.168 user 1m1.126s 00:29:45.168 sys 0m4.230s 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1128 -- # 
xtrace_disable 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:45.168 ************************************ 00:29:45.168 END TEST nvmf_bdevperf 00:29:45.168 ************************************ 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:45.168 ************************************ 00:29:45.168 START TEST nvmf_target_disconnect 00:29:45.168 ************************************ 00:29:45.168 06:40:16 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:29:45.427 * Looking for test storage... 00:29:45.427 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lcov --version 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:29:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.427 --rc genhtml_branch_coverage=1 00:29:45.427 --rc genhtml_function_coverage=1 00:29:45.427 --rc genhtml_legend=1 00:29:45.427 --rc geninfo_all_blocks=1 00:29:45.427 --rc geninfo_unexecuted_blocks=1 00:29:45.427 00:29:45.427 ' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:29:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.427 --rc genhtml_branch_coverage=1 00:29:45.427 --rc genhtml_function_coverage=1 00:29:45.427 --rc genhtml_legend=1 00:29:45.427 --rc geninfo_all_blocks=1 00:29:45.427 --rc geninfo_unexecuted_blocks=1 00:29:45.427 00:29:45.427 ' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:29:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.427 --rc genhtml_branch_coverage=1 00:29:45.427 --rc genhtml_function_coverage=1 00:29:45.427 --rc genhtml_legend=1 00:29:45.427 --rc geninfo_all_blocks=1 00:29:45.427 --rc geninfo_unexecuted_blocks=1 00:29:45.427 00:29:45.427 ' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:29:45.427 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:45.427 --rc genhtml_branch_coverage=1 00:29:45.427 --rc genhtml_function_coverage=1 00:29:45.427 --rc genhtml_legend=1 00:29:45.427 --rc geninfo_all_blocks=1 00:29:45.427 --rc geninfo_unexecuted_blocks=1 00:29:45.427 00:29:45.427 ' 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@7 -- # uname -s 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:45.427 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:45.428 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:29:45.428 06:40:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:29:47.964 Found 0000:09:00.0 (0x8086 - 0x159b) 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.964 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:29:47.965 Found 0000:09:00.1 (0x8086 - 0x159b) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:29:47.965 Found net devices under 0000:09:00.0: cvl_0_0 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:29:47.965 Found net devices under 0000:09:00.1: cvl_0_1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:47.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:47.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:29:47.965 00:29:47.965 --- 10.0.0.2 ping statistics --- 00:29:47.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.965 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:47.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:47.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:29:47.965 00:29:47.965 --- 10.0.0.1 ping statistics --- 00:29:47.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:47.965 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:47.965 ************************************ 00:29:47.965 START TEST nvmf_target_disconnect_tc1 00:29:47.965 ************************************ 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc1 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:47.965 06:40:19 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:29:47.965 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:47.965 [2024-11-20 06:40:19.579731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:47.965 [2024-11-20 06:40:19.579798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x838f40 with addr=10.0.0.2, port=4420 00:29:47.965 [2024-11-20 06:40:19.579833] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:29:47.965 [2024-11-20 06:40:19.579853] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:29:47.965 [2024-11-20 06:40:19.579866] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:29:47.965 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:29:47.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:29:47.966 Initializing NVMe Controllers 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:29:47.966 00:29:47.966 real 0m0.094s 00:29:47.966 user 0m0.043s 00:29:47.966 sys 0m0.051s 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:47.966 ************************************ 00:29:47.966 END TEST nvmf_target_disconnect_tc1 00:29:47.966 ************************************ 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:29:47.966 ************************************ 00:29:47.966 START TEST nvmf_target_disconnect_tc2 00:29:47.966 ************************************ 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1127 -- # nvmf_target_disconnect_tc2 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2206318 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2206318 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2206318 ']' 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:47.966 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable 00:29:47.966 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:47.966 [2024-11-20 06:40:19.697148] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:47.966 [2024-11-20 06:40:19.697229] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:47.966 [2024-11-20 06:40:19.770313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:48.224 [2024-11-20 06:40:19.831335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:48.224 [2024-11-20 06:40:19.831387] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:48.224 [2024-11-20 06:40:19.831417] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:48.224 [2024-11-20 06:40:19.831428] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:48.224 [2024-11-20 06:40:19.831438] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:48.224 [2024-11-20 06:40:19.832947] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:48.224 [2024-11-20 06:40:19.833016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:48.224 [2024-11-20 06:40:19.833078] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:48.224 [2024-11-20 06:40:19.833083] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.224 06:40:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 Malloc0 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 [2024-11-20 06:40:20.033883] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.224 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.224 06:40:20 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.225 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.482 [2024-11-20 06:40:20.062237] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2206345 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:29:48.482 06:40:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:50.398 06:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2206318 00:29:50.398 06:40:22 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error 
(sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 [2024-11-20 06:40:22.087335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read 
completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Write completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 [2024-11-20 06:40:22.087670] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.398 starting I/O failed 00:29:50.398 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 
00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 [2024-11-20 06:40:22.087981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 
starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Read completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 Write completed with error (sct=0, sc=8) 00:29:50.399 starting I/O failed 00:29:50.399 [2024-11-20 06:40:22.088326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:50.399 [2024-11-20 06:40:22.088455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.088505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.088647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.088678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.088798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.088826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.088910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.088936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.089028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.089061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.089166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.089193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.089338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.089385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.089494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.089522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 
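
Note on the failures above: each "Read/Write completed with error (sct=0, sc=8)" entry is an NVMe completion whose status fields report the command as aborted rather than serviced, and the "CQ transport error -6 (No such device or address)" entries show the TCP qpairs themselves disappearing underneath the outstanding I/O. The short standalone C sketch below is illustrative only and not part of the test suite; the helper names (decode_status, nvme_generic_sc_str) are invented here, and it assumes the two printed values are the NVMe Status Code Type and Status Code fields, using the generic-status names from the NVMe base specification.

/* Minimal sketch (not SPDK code): decode an "(sct=N, sc=M)" pair as printed
 * in the log above, assuming sct/sc are the NVMe Status Code Type and Status
 * Code fields. Only the generic-status values relevant here are spelled out. */
#include <stdio.h>

static const char *nvme_generic_sc_str(unsigned sc)
{
    switch (sc) {
    case 0x00: return "Successful Completion";
    case 0x04: return "Data Transfer Error";
    case 0x07: return "Command Abort Requested";
    case 0x08: return "Command Aborted due to SQ Deletion";
    default:   return "other generic status";
    }
}

static void decode_status(unsigned sct, unsigned sc)
{
    if (sct == 0) {
        /* sct=0 selects the generic command status table. */
        printf("sct=0 (generic), sc=%u: %s\n", sc, nvme_generic_sc_str(sc));
    } else {
        printf("sct=%u, sc=%u: non-generic status code type\n", sct, sc);
    }
}

int main(void)
{
    decode_status(0, 8);   /* the pair reported for every failed I/O above */
    return 0;
}

Compiled on its own, decode_status(0, 8) prints "Command Aborted due to SQ Deletion", which would be consistent with the target tearing down its submission queues while I/O was still in flight.
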
00:29:50.399 [2024-11-20 06:40:22.089620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.399 [2024-11-20 06:40:22.089647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.399 qpair failed and we were unable to recover it. 00:29:50.399 [2024-11-20 06:40:22.089731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.089766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.089854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.089881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.090002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.090028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.090181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.090220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.090354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.090383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.090471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.090497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.090653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.090679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.090872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.090899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.091054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 
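
The connect() failures repeated above all carry errno = 111, which on Linux is ECONNREFUSED: nothing is accepting TCP connections at 10.0.0.2:4420 while the initiator keeps retrying, so every reconnect attempt ends with "qpair failed and we were unable to recover it." A minimal standalone probe like the following (illustrative only; the address and port are taken from the log, and this program is not part of the test) reproduces the same condition with a plain POSIX socket:

/* Illustrative probe: attempt the same TCP connect the initiator is making,
 * to confirm that errno 111 (ECONNREFUSED) simply means no listener is
 * accepting connections at 10.0.0.2:4420. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* Matches the posix_sock_create errors above when the listener is down. */
        printf("connect() failed: errno=%d (%s)\n", errno, strerror(errno));
    } else {
        printf("listener is accepting connections again\n");
    }

    close(fd);
    return 0;
}

Run while the target's listener is down, the probe prints errno=111 (Connection refused); once the listener is back, it reports that the port is accepting connections again.
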
00:29:50.400 [2024-11-20 06:40:22.091201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.091338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.091449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.091560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.091681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.091793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.091820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.092022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.092051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.092183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.092223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.092344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.092395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.092516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.092544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 
00:29:50.400 [2024-11-20 06:40:22.092697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.092723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.092817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.092843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.093945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.093971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 
00:29:50.400 [2024-11-20 06:40:22.094068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.094108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.094255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.094282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.094385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.094412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.094506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.094534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.400 [2024-11-20 06:40:22.094658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.400 [2024-11-20 06:40:22.094692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.400 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.094813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.094868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 
00:29:50.401 [2024-11-20 06:40:22.095554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.095863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.095977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.096115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.096253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.096394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.096530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.096651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.096817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 
00:29:50.401 [2024-11-20 06:40:22.096956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.096982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.097936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.097962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.098143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.098195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.098318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.098345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 
00:29:50.401 [2024-11-20 06:40:22.098435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.098461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.098539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.098565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.098648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.098673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.098808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.401 [2024-11-20 06:40:22.098833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.401 qpair failed and we were unable to recover it. 00:29:50.401 [2024-11-20 06:40:22.098913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.098939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.099052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.099164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.099293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.099415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.099534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 
00:29:50.402 [2024-11-20 06:40:22.099708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.099902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.099929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.100935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.100961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 
00:29:50.402 [2024-11-20 06:40:22.101078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.101932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.101958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 
00:29:50.402 [2024-11-20 06:40:22.102338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.102854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.102970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.103002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.103129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.103168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.103260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.103317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.103435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.103464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 00:29:50.402 [2024-11-20 06:40:22.103555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.402 [2024-11-20 06:40:22.103582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.402 qpair failed and we were unable to recover it. 
00:29:50.402 [2024-11-20 06:40:22.103690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.103717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.103813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.103841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.103931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.103959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.104089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.104128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.104272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.104314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.104408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.104436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.104546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.104572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.104693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.104745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.104931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.104997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.105147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.105173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 
00:29:50.403 [2024-11-20 06:40:22.105284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.105321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.105409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.105436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.105555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.105583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.105693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.105720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.105843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.105869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.106008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.106131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.106315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.106476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.106622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 
00:29:50.403 [2024-11-20 06:40:22.106836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.106941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.106967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.107117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.107145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.107239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.107266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.107367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.107394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.403 qpair failed and we were unable to recover it. 00:29:50.403 [2024-11-20 06:40:22.107506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.403 [2024-11-20 06:40:22.107532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.107652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.107679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.107790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.107816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.107892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.107918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 
00:29:50.404 [2024-11-20 06:40:22.108173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.108965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.108991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.109111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.109232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.109376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 
00:29:50.404 [2024-11-20 06:40:22.109508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.109650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.109751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.109913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.109939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 
00:29:50.404 [2024-11-20 06:40:22.110806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.110941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.110966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.111894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 00:29:50.404 [2024-11-20 06:40:22.111998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.404 [2024-11-20 06:40:22.112024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.404 qpair failed and we were unable to recover it. 
00:29:50.405 [2024-11-20 06:40:22.112137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.112299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.112416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.112526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.112646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.112787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.112905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.112930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 
00:29:50.405 [2024-11-20 06:40:22.113444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.113963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.113990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 
00:29:50.405 [2024-11-20 06:40:22.114723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.114963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.114990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.115087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.115115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.115253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.115280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.115375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.115402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.115485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.115511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.405 [2024-11-20 06:40:22.115672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.405 [2024-11-20 06:40:22.115724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.405 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.115882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.115937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 
00:29:50.406 [2024-11-20 06:40:22.116214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.116882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.116976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.117159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.117274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.117397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 
00:29:50.406 [2024-11-20 06:40:22.117495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.117617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.117755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.117869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.117896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.118032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.118172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.118340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.118562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.118732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.118837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 
00:29:50.406 [2024-11-20 06:40:22.118950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.118978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.406 [2024-11-20 06:40:22.119867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.406 [2024-11-20 06:40:22.119893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.406 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.120032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.120178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 
00:29:50.407 [2024-11-20 06:40:22.120287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.120468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.120626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.120782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.120930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.120957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.121082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.121218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.121352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.121470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.121635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 
00:29:50.407 [2024-11-20 06:40:22.121776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.121922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.121950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.122934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.122960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.123075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.123102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 
00:29:50.407 [2024-11-20 06:40:22.123240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.123267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.123388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.123414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.123520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.123546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.123685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.123712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.123903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.123964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.124056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.124081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.124169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.124195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.407 [2024-11-20 06:40:22.124273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.407 [2024-11-20 06:40:22.124299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.407 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.124429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.124455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.124545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.124571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 
00:29:50.408 [2024-11-20 06:40:22.124684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.124710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.124788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.124814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.124932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.124960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.125062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.125088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.125199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.125225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.125412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.125440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.125627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.125659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.125768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.125795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.125882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.125909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.126098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.126124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 
00:29:50.408 [2024-11-20 06:40:22.126263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.126289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.126429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.126456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.126542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.126568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.126653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.126679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.126888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.126913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.126992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 
00:29:50.408 [2024-11-20 06:40:22.127601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.127958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.127984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.128119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.128248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.128359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.128471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.128609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.408 [2024-11-20 06:40:22.128718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 
00:29:50.408 [2024-11-20 06:40:22.128835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.408 [2024-11-20 06:40:22.128861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.408 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.128948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.128974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.129957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.129983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 
00:29:50.409 [2024-11-20 06:40:22.130090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.130955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.130982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.131085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.131214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 
00:29:50.409 [2024-11-20 06:40:22.131348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.131473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.131641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.131748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.131919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.131968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.132106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.409 [2024-11-20 06:40:22.132132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.409 qpair failed and we were unable to recover it. 00:29:50.409 [2024-11-20 06:40:22.132217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.132245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.132363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.132390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.132504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.132531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.132666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.132717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 
00:29:50.410 [2024-11-20 06:40:22.132927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.132991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.133077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.133104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.133199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.133226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.133361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.133401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.133544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.133573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.133689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.133717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.133927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.133977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.134148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.134312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.134415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 
00:29:50.410 [2024-11-20 06:40:22.134577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.134687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.134838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.134950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.134976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.135098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.135232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.135356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.135494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.135664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.135780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 
00:29:50.410 [2024-11-20 06:40:22.135920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.135947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.136067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.136189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.136310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.136426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.136558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.410 [2024-11-20 06:40:22.136666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.410 qpair failed and we were unable to recover it. 00:29:50.410 [2024-11-20 06:40:22.136767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.136793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.136933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.136960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.137075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 
00:29:50.411 [2024-11-20 06:40:22.137205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.137363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.137516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.137668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.137788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.137928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.137954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 
00:29:50.411 [2024-11-20 06:40:22.138541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.138964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.138991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 
00:29:50.411 [2024-11-20 06:40:22.139802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.139909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.139935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.140880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.140976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.141002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 
00:29:50.411 [2024-11-20 06:40:22.141116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.141143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.411 [2024-11-20 06:40:22.141257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.411 [2024-11-20 06:40:22.141284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.411 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.141407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.141435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.141519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.141547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.141634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.141660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.141777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.141826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.141910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.141937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 
00:29:50.412 [2024-11-20 06:40:22.142441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.142922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.142950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 
00:29:50.412 [2024-11-20 06:40:22.143686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.143914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.143941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.144862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.144889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 
00:29:50.412 [2024-11-20 06:40:22.145001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.145028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.145113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.145139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.145232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.145259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.145377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.145404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.145558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.145598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.145743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.412 [2024-11-20 06:40:22.145771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.412 qpair failed and we were unable to recover it. 00:29:50.412 [2024-11-20 06:40:22.145956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.146021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.146208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.146234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.146354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.146382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.146469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.146496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 
00:29:50.413 [2024-11-20 06:40:22.146582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.146610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.146762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.146829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.146993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.147054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.147149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.147177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.147288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.147322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.147426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.147452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.147566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.147592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.147745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.147808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.148006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.148060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.148172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.148200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 
00:29:50.413 [2024-11-20 06:40:22.148339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.148378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.148496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.148524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.148641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.148688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.148911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.148975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.149119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.149263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.149444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.149581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.149705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.149845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 
00:29:50.413 [2024-11-20 06:40:22.149959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.149985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.150185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.150330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.150438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.150582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.150688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.150847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.413 [2024-11-20 06:40:22.150966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.413 [2024-11-20 06:40:22.151017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.413 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.151122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.151161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.151282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.151329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 
00:29:50.414 [2024-11-20 06:40:22.151452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.151481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.151594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.151621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.151715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.151741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.151879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.151905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.151994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.152141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.152293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.152468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.152609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.152746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 
00:29:50.414 [2024-11-20 06:40:22.152886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.152912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.152991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.153969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.153996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.154112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.154138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 
00:29:50.414 [2024-11-20 06:40:22.154261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.154300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.154449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.154488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.154613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.154641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.154755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.154783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.414 qpair failed and we were unable to recover it. 00:29:50.414 [2024-11-20 06:40:22.154919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.414 [2024-11-20 06:40:22.154946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.155130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.155291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.155414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.155528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.155688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 
00:29:50.415 [2024-11-20 06:40:22.155797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.155944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.155972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.156940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.156965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 
00:29:50.415 [2024-11-20 06:40:22.157068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.157200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.157366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.157501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.157635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.157739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.157874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.157901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 
00:29:50.415 [2024-11-20 06:40:22.158465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.158865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.158978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.159004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.159107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.159133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.159247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.159273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.415 [2024-11-20 06:40:22.159400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.415 [2024-11-20 06:40:22.159426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.415 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.159524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.159564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.159683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.159711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 
00:29:50.416 [2024-11-20 06:40:22.159819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.159846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.159956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.159982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.160962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.160991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 
00:29:50.416 [2024-11-20 06:40:22.161106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.161953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.161981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.162090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.162225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 
00:29:50.416 [2024-11-20 06:40:22.162362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.162467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.162607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.162748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.162900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.162940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.163081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.163110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.416 [2024-11-20 06:40:22.163194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.416 [2024-11-20 06:40:22.163223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.416 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.163315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.163342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.163430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.163457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.163548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.163574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 
00:29:50.417 [2024-11-20 06:40:22.163697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.163743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.163858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.163884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.164866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.164892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 
00:29:50.417 [2024-11-20 06:40:22.165007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.165938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.165966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 
00:29:50.417 [2024-11-20 06:40:22.166318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.166900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.166928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.167037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.167064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.167174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.167201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.167286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.167323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 00:29:50.417 [2024-11-20 06:40:22.167419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.167445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.417 qpair failed and we were unable to recover it. 
00:29:50.417 [2024-11-20 06:40:22.167553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.417 [2024-11-20 06:40:22.167578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.167730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.167782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.167961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.168838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.168979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 
00:29:50.418 [2024-11-20 06:40:22.169149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.169264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.169386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.169500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.169665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.169831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.169882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 
00:29:50.418 [2024-11-20 06:40:22.170501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.170847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.170972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.171107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.171272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.171428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.171566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.171698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.171942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.171990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 
00:29:50.418 [2024-11-20 06:40:22.172095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.172151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.172232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.172258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.172407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.172434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.418 [2024-11-20 06:40:22.172520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.418 [2024-11-20 06:40:22.172546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.418 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.172668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.172695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.172834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.172861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.172948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.172974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.173084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.173112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.173222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.173248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.173358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.173398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 
00:29:50.419 [2024-11-20 06:40:22.173498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.173525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.173665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.173691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.173780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.173824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.174766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 
00:29:50.419 [2024-11-20 06:40:22.174905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.174931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.175840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.175895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.176036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.176086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 
00:29:50.419 [2024-11-20 06:40:22.176267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.176292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.176380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.176406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.176489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.176515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.176648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.176672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.419 qpair failed and we were unable to recover it. 00:29:50.419 [2024-11-20 06:40:22.176788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.419 [2024-11-20 06:40:22.176830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.177075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.177186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.177353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.177479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.177619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 
00:29:50.420 [2024-11-20 06:40:22.177735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.177876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.177902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 00:29:50.420 [2024-11-20 06:40:22.178882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.420 [2024-11-20 06:40:22.178910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.420 qpair failed and we were unable to recover it. 
00:29:50.420 [2024-11-20 06:40:22.179017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.420 [2024-11-20 06:40:22.179042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:50.420 qpair failed and we were unable to recover it.
00:29:50.420 [2024-11-20 06:40:22.179128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.420 [2024-11-20 06:40:22.179153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:50.420 qpair failed and we were unable to recover it.
[... the same three-line failure repeats for every subsequent reconnect attempt in this window (06:40:22.179 through 06:40:22.207), always against addr=10.0.0.2, port=4420, always with connect() errno = 111, cycling through tqpair handles 0x7fe768000b90, 0x7fe75c000b90, 0x7fe760000b90, and 0x205ffa0; none of the qpairs is recovered ...]
00:29:50.427 [2024-11-20 06:40:22.207560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.427 [2024-11-20 06:40:22.207586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:50.427 qpair failed and we were unable to recover it.
00:29:50.427 [2024-11-20 06:40:22.207680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.207707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.207797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.207822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.207922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.207947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.208774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 
00:29:50.427 [2024-11-20 06:40:22.208914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.208946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.209897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.209923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.210040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.210066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 
00:29:50.427 [2024-11-20 06:40:22.210178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.427 [2024-11-20 06:40:22.210204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.427 qpair failed and we were unable to recover it. 00:29:50.427 [2024-11-20 06:40:22.210321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.210348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.210446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.210485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.210591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.210631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.210759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.210787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.210900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.210927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 
00:29:50.428 [2024-11-20 06:40:22.211599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.211932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.211959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 
00:29:50.428 [2024-11-20 06:40:22.212810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.212953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.212981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.213934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.213961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.214052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.214081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 
00:29:50.428 [2024-11-20 06:40:22.214202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.214232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.214331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.214359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.214453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.214479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.428 [2024-11-20 06:40:22.214565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.428 [2024-11-20 06:40:22.214591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.428 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.214671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.214696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.214808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.214840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.214935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.214964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.215082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.215108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.215223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.215250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.215344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.215371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 
00:29:50.429 [2024-11-20 06:40:22.215463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.215489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.429 [2024-11-20 06:40:22.215572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.429 [2024-11-20 06:40:22.215599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.429 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.215685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.215711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.215801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.215831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.215925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.215954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 
00:29:50.714 [2024-11-20 06:40:22.216695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.216923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.216950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.217825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 
00:29:50.714 [2024-11-20 06:40:22.217942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.217967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.714 [2024-11-20 06:40:22.218886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.714 qpair failed and we were unable to recover it. 00:29:50.714 [2024-11-20 06:40:22.218974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 
00:29:50.715 [2024-11-20 06:40:22.219210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.219861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.219993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 
00:29:50.715 [2024-11-20 06:40:22.220505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.220944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.220982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 
00:29:50.715 [2024-11-20 06:40:22.221726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.221894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.221973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.222920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.222948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 
00:29:50.715 [2024-11-20 06:40:22.223058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.223098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.223224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.223269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.223376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.223405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.223492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.223518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.223606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.223632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.715 [2024-11-20 06:40:22.223798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.715 [2024-11-20 06:40:22.223852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.715 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.223945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.223971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 
00:29:50.716 [2024-11-20 06:40:22.224436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.224946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.224973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.225068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.225180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.225289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.225441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.225552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 
00:29:50.716 [2024-11-20 06:40:22.225711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.225865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.225902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.226890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.226926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 
00:29:50.716 [2024-11-20 06:40:22.227235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.227900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.227982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.228110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.228245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.228388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 
00:29:50.716 [2024-11-20 06:40:22.228509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.228613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.228735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.716 qpair failed and we were unable to recover it. 00:29:50.716 [2024-11-20 06:40:22.228846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.716 [2024-11-20 06:40:22.228874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 
00:29:50.717 [2024-11-20 06:40:22.229795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.229918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.229945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.230842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.230868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 
00:29:50.717 [2024-11-20 06:40:22.231120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.231865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.231891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 
00:29:50.717 [2024-11-20 06:40:22.232364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.232950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.232976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.233093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.233119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.233204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.233233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.233335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.233363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.233448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.233474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 
00:29:50.717 [2024-11-20 06:40:22.233583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.717 [2024-11-20 06:40:22.233608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.717 qpair failed and we were unable to recover it. 00:29:50.717 [2024-11-20 06:40:22.233691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.233718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.233803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.233830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.233949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.233977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 
00:29:50.718 [2024-11-20 06:40:22.234813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.234935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.234974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.235894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.235920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.236066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 
00:29:50.718 [2024-11-20 06:40:22.236184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.236328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.236455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.236589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.236754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.236907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.236933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 
00:29:50.718 [2024-11-20 06:40:22.237529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.718 qpair failed and we were unable to recover it. 00:29:50.718 [2024-11-20 06:40:22.237944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.718 [2024-11-20 06:40:22.237983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 
00:29:50.719 [2024-11-20 06:40:22.238850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.238876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.238989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.239920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.239945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 
00:29:50.719 [2024-11-20 06:40:22.240169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.240890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.240979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.241094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.241250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 
00:29:50.719 [2024-11-20 06:40:22.241416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.241537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.241651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.241765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.241875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.241903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.242002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.242029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.242156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.242198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.242318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.242347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.242443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.242471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 00:29:50.719 [2024-11-20 06:40:22.242560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.719 [2024-11-20 06:40:22.242587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.719 qpair failed and we were unable to recover it. 
00:29:50.719 [2024-11-20 06:40:22.242681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.242709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.242797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.242824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.242913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.242940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 
00:29:50.720 [2024-11-20 06:40:22.243859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.243888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.243981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.244885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.244933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 
00:29:50.720 [2024-11-20 06:40:22.245158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.245845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.245998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.246170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.246311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.246428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 
00:29:50.720 [2024-11-20 06:40:22.246534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.246644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.720 qpair failed and we were unable to recover it. 00:29:50.720 [2024-11-20 06:40:22.246745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.720 [2024-11-20 06:40:22.246770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.246886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.246912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.247037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.247237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.247374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.247518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.247669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.247784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 
00:29:50.721 [2024-11-20 06:40:22.247901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.247927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.248877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.248903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 
00:29:50.721 [2024-11-20 06:40:22.249157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.249905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.249932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 
00:29:50.721 [2024-11-20 06:40:22.250409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.250929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.250967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.251068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.251106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.251200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.721 [2024-11-20 06:40:22.251226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.721 qpair failed and we were unable to recover it. 00:29:50.721 [2024-11-20 06:40:22.251339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.251367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.251456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.251483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.251561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.251589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 
00:29:50.722 [2024-11-20 06:40:22.251687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.251715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.251823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.251849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.251933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.251960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.252785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 
00:29:50.722 [2024-11-20 06:40:22.252899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.252926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.253922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.253956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 
00:29:50.722 [2024-11-20 06:40:22.254151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.254920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.254946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.255066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.722 [2024-11-20 06:40:22.255094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.722 qpair failed and we were unable to recover it. 00:29:50.722 [2024-11-20 06:40:22.255219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.255259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.255358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.255387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 
00:29:50.723 [2024-11-20 06:40:22.255473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.255499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.255584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.255612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.255726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.255752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.255887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.255923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 
00:29:50.723 [2024-11-20 06:40:22.256781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.256901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.256927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.257818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.257953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 
00:29:50.723 [2024-11-20 06:40:22.258118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.258227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.258349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.258469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.258574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.258679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.258796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.258838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.259004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.259055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.259174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.259207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 00:29:50.723 [2024-11-20 06:40:22.259287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.259326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.723 qpair failed and we were unable to recover it. 
00:29:50.723 [2024-11-20 06:40:22.259407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.723 [2024-11-20 06:40:22.259433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.259523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.259549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.259693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.259742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.259851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.259886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 
00:29:50.724 [2024-11-20 06:40:22.260785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.260930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.260956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.261083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.261111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.261228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.261254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.261367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.261393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.261489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.261515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.261713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.261741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.261882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.261931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 
00:29:50.724 [2024-11-20 06:40:22.262319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.262946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.262977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.263093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.263120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.263211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.263237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.263332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.263360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 00:29:50.724 [2024-11-20 06:40:22.263450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.724 [2024-11-20 06:40:22.263476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.724 qpair failed and we were unable to recover it. 
00:29:50.725 [2024-11-20 06:40:22.263566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.263592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.263678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.263703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.263812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.263837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.263948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.263974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.264066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.264189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.264339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.264471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.264591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.264761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 
00:29:50.725 [2024-11-20 06:40:22.264873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.264899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.265967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.265995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 
00:29:50.725 [2024-11-20 06:40:22.266111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.266955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.266987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 
00:29:50.725 [2024-11-20 06:40:22.267349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.267869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.725 qpair failed and we were unable to recover it. 00:29:50.725 [2024-11-20 06:40:22.267968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.725 [2024-11-20 06:40:22.268001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.726 qpair failed and we were unable to recover it. 00:29:50.726 [2024-11-20 06:40:22.268117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.726 [2024-11-20 06:40:22.268143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.726 qpair failed and we were unable to recover it. 00:29:50.726 [2024-11-20 06:40:22.268268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.726 [2024-11-20 06:40:22.268313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.726 qpair failed and we were unable to recover it. 00:29:50.726 [2024-11-20 06:40:22.268405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.726 [2024-11-20 06:40:22.268433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.726 qpair failed and we were unable to recover it. 00:29:50.726 [2024-11-20 06:40:22.268527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.726 [2024-11-20 06:40:22.268554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.726 qpair failed and we were unable to recover it. 
00:29:50.726 [2024-11-20 06:40:22.268663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.726 [2024-11-20 06:40:22.268689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:50.726 qpair failed and we were unable to recover it.
00:29:50.726 [2024-11-20 06:40:22.268765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.726 [2024-11-20 06:40:22.268792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:50.726 qpair failed and we were unable to recover it.
00:29:50.726 [2024-11-20 06:40:22.268889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.726 [2024-11-20 06:40:22.268927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:50.726 qpair failed and we were unable to recover it.
00:29:50.726 [2024-11-20 06:40:22.269056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.726 [2024-11-20 06:40:22.269084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420
00:29:50.726 qpair failed and we were unable to recover it.
00:29:50.726 [2024-11-20 06:40:22.269204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.726 [2024-11-20 06:40:22.269232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420
00:29:50.726 qpair failed and we were unable to recover it.
[The same three-line sequence (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error; qpair failed and we were unable to recover it.) repeats continuously from 06:40:22.269347 through 06:40:22.296221 (console timestamps 00:29:50.726 to 00:29:50.732), cycling over tqpairs 0x7fe768000b90, 0x7fe760000b90, 0x7fe75c000b90, and 0x205ffa0, all targeting addr=10.0.0.2, port=4420.]
00:29:50.732 [2024-11-20 06:40:22.296331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.296359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.296478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.296504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.296594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.296621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.296736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.296763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.296861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.296896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 
00:29:50.732 [2024-11-20 06:40:22.297706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.297906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.297987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.298012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.298127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.298154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.298253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.298296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.298418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.298457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.298575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.298602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.298778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.298813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.298974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.299173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 
00:29:50.732 [2024-11-20 06:40:22.299338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.299480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.299611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.299763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.299926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.299974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.300111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.300255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.300392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.300523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.300661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 
00:29:50.732 [2024-11-20 06:40:22.300791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.300948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.300994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.301135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.301161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.301236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.301262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.732 [2024-11-20 06:40:22.301380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.732 [2024-11-20 06:40:22.301420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.732 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.301515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.301542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.301662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.301689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.301803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.301829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.301911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.301937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.302041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 
00:29:50.733 [2024-11-20 06:40:22.302255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.302407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.302537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.302665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.302787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.302898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.302924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 
00:29:50.733 [2024-11-20 06:40:22.303568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.303887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.303992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.304165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.304329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.304448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.304563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.304709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.304819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 
00:29:50.733 [2024-11-20 06:40:22.304946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.304974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.733 [2024-11-20 06:40:22.305905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.733 [2024-11-20 06:40:22.305932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.733 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.306025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 
00:29:50.734 [2024-11-20 06:40:22.306171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.306282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.306449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.306565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.306719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.306892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.306917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.307090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.307140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.307251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.307290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.307396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.307424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.307504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.307531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 
00:29:50.734 [2024-11-20 06:40:22.307699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.307750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.307860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.307910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.308856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.308882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 
00:29:50.734 [2024-11-20 06:40:22.308990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.309898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.309924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 
00:29:50.734 [2024-11-20 06:40:22.310324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.310930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.310959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.734 [2024-11-20 06:40:22.311067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.734 [2024-11-20 06:40:22.311106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.734 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.311238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.311266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.311362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.311389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.311497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.311523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 
00:29:50.735 [2024-11-20 06:40:22.311625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.311651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.311789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.311814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.311901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.311926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.312736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 
00:29:50.735 [2024-11-20 06:40:22.312844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.312871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.313864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.313914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.314024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.314068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 
00:29:50.735 [2024-11-20 06:40:22.314229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.314263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.314356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.314384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.314481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.314508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.314635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.314688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.314826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.314874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 
00:29:50.735 [2024-11-20 06:40:22.315674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.315944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.315970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.316080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.316105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.735 [2024-11-20 06:40:22.316201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.735 [2024-11-20 06:40:22.316229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.735 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.316355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.316395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.316484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.316513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.316622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.316649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.316756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.316783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.316899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.316925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 
00:29:50.736 [2024-11-20 06:40:22.317038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.317914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.317941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 
00:29:50.736 [2024-11-20 06:40:22.318309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.318960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.318986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 
00:29:50.736 [2024-11-20 06:40:22.319548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.319914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.319940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 
00:29:50.736 [2024-11-20 06:40:22.320775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.320883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.320909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.736 [2024-11-20 06:40:22.321028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.736 [2024-11-20 06:40:22.321054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.736 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.321888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.321914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 
00:29:50.737 [2024-11-20 06:40:22.322025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.322867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.322985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.323136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.323238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 
00:29:50.737 [2024-11-20 06:40:22.323349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.323474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.323631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.323787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.323925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.323951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 
00:29:50.737 [2024-11-20 06:40:22.324651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.324932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.324961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.737 [2024-11-20 06:40:22.325896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.325922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 
00:29:50.737 [2024-11-20 06:40:22.326032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.737 [2024-11-20 06:40:22.326058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.737 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.326966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.326993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 
00:29:50.738 [2024-11-20 06:40:22.327185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.327897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.327923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 
00:29:50.738 [2024-11-20 06:40:22.328405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.328951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.328977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 
00:29:50.738 [2024-11-20 06:40:22.329661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.329943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.329969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.330077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.330103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.738 [2024-11-20 06:40:22.330215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.738 [2024-11-20 06:40:22.330241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.738 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.330335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.330363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.330447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.330473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.330591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.330617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.330724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.330750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.330842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.330868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 
00:29:50.739 [2024-11-20 06:40:22.330976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.331856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.331994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.332155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 
00:29:50.739 [2024-11-20 06:40:22.332264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.332400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.332524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.332674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.332784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.332894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.332920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.333038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.333145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.333280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.333457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 
00:29:50.739 [2024-11-20 06:40:22.333611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.333745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.333884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.333911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.334777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 
00:29:50.739 [2024-11-20 06:40:22.334938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.334964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.335044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.335075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.739 [2024-11-20 06:40:22.335175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.739 [2024-11-20 06:40:22.335214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.739 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.335348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.335376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.335494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.335529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.335618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.335645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.335750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.335802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.335918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.335945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 
00:29:50.740 [2024-11-20 06:40:22.336289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.336937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.336963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.337068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.337097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.337188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.337214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.337320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.337349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 00:29:50.740 [2024-11-20 06:40:22.337434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.740 [2024-11-20 06:40:22.337460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.740 qpair failed and we were unable to recover it. 
00:29:50.745 [2024-11-20 06:40:22.363554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.745 [2024-11-20 06:40:22.363594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.745 qpair failed and we were unable to recover it. 00:29:50.745 [2024-11-20 06:40:22.363693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.745 [2024-11-20 06:40:22.363721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.745 qpair failed and we were unable to recover it. 00:29:50.745 [2024-11-20 06:40:22.363806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.745 [2024-11-20 06:40:22.363833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.745 qpair failed and we were unable to recover it. 00:29:50.745 [2024-11-20 06:40:22.363950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.745 [2024-11-20 06:40:22.363977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.745 qpair failed and we were unable to recover it. 00:29:50.745 [2024-11-20 06:40:22.364060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.745 [2024-11-20 06:40:22.364087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.745 qpair failed and we were unable to recover it. 00:29:50.745 [2024-11-20 06:40:22.364197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.745 [2024-11-20 06:40:22.364223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.364350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.364378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.364469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.364495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.364620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.364646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.364758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.364785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 
00:29:50.746 [2024-11-20 06:40:22.364860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.364886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.365927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.365954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 
00:29:50.746 [2024-11-20 06:40:22.366197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.366913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.366995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 
00:29:50.746 [2024-11-20 06:40:22.367327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.367951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.367978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 
00:29:50.746 [2024-11-20 06:40:22.368582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.746 [2024-11-20 06:40:22.368853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.746 qpair failed and we were unable to recover it. 00:29:50.746 [2024-11-20 06:40:22.368940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.368966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 
00:29:50.747 [2024-11-20 06:40:22.369746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.369865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.369897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.370965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.370991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 
00:29:50.747 [2024-11-20 06:40:22.371129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.371256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.371406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.371525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.371644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.371765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.371901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.371928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 
00:29:50.747 [2024-11-20 06:40:22.372373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.372969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.372995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.373079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.747 [2024-11-20 06:40:22.373106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.747 qpair failed and we were unable to recover it. 00:29:50.747 [2024-11-20 06:40:22.373200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.373326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.373488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 
00:29:50.748 [2024-11-20 06:40:22.373603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.373735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.373847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.373959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.373986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.374102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.374250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.374409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.374519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.374636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.374769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 
00:29:50.748 [2024-11-20 06:40:22.374909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.374934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.375913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.375999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.376155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 
00:29:50.748 [2024-11-20 06:40:22.376313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.376428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.376566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.376703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.376839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.376959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.376992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 
00:29:50.748 [2024-11-20 06:40:22.377579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.377927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.377954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.378066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.378093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.748 [2024-11-20 06:40:22.378224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.748 [2024-11-20 06:40:22.378251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.748 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.378338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.378365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.378476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.378501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.378584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.378616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.378759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.378785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 
00:29:50.749 [2024-11-20 06:40:22.378918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.378944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.379924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.379950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 00:29:50.749 [2024-11-20 06:40:22.380054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.749 [2024-11-20 06:40:22.380082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.749 qpair failed and we were unable to recover it. 
00:29:50.749 [2024-11-20 06:40:22.380177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.749 [2024-11-20 06:40:22.380208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:50.749 qpair failed and we were unable to recover it.
00:29:50.750 [2024-11-20 06:40:22.385343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:50.750 [2024-11-20 06:40:22.385382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420
00:29:50.750 qpair failed and we were unable to recover it.
[... identical "connect() failed, errno = 111" / "sock connection error" / "qpair failed and we were unable to recover it." entries repeat continuously for tqpair=0x205ffa0 and tqpair=0x7fe75c000b90 (addr=10.0.0.2, port=4420) through 2024-11-20 06:40:22.408733 ...]
00:29:50.755 [2024-11-20 06:40:22.408851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.408877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.408988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.409860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.409989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.410114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 
00:29:50.755 [2024-11-20 06:40:22.410253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.410405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.410522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.410673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.410877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.410909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.411013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.411047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.411210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.411246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.411370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.411396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.411519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.411546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.411647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.411674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 
00:29:50.755 [2024-11-20 06:40:22.411826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.411862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.412002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.412042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.412265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.412320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.412440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.412467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.412594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.412630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.412801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.412837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.412951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.412996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.413169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.413205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.413372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.413399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.413484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.413512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 
00:29:50.755 [2024-11-20 06:40:22.413636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.413663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.413755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.413782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.413967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.414002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.414119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.414154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.414298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.414330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.414483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.414509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.414594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.414621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.755 qpair failed and we were unable to recover it. 00:29:50.755 [2024-11-20 06:40:22.414714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.755 [2024-11-20 06:40:22.414740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.414958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.414994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.415198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.415233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 
00:29:50.756 [2024-11-20 06:40:22.415407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.415434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.415515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.415541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.415656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.415682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.415772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.415798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.415911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.415937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.416024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.416050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.416157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.416208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.416363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.416391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.416509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.416535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.416678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.416714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 
00:29:50.756 [2024-11-20 06:40:22.416838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.416865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.416975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.417860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.417991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.418164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 
00:29:50.756 [2024-11-20 06:40:22.418300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.418460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.418632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.418774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.418956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.418991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.419109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.419144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.419291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.419350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.419452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.419485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.419629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.419661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.419851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.419902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 
00:29:50.756 [2024-11-20 06:40:22.420053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.420086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.420262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.420295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.420440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.420474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.756 qpair failed and we were unable to recover it. 00:29:50.756 [2024-11-20 06:40:22.420572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.756 [2024-11-20 06:40:22.420623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.420817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.420851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.420989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.421137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.421359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.421509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.421645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 
00:29:50.757 [2024-11-20 06:40:22.421800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.421956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.421992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.422175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.422210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.422346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.422381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.422498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.422531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.422714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.422750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.422885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.422920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.423038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.423072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.423239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.423276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.423416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.423449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 
00:29:50.757 [2024-11-20 06:40:22.423549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.423582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.423690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.423723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.423838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.423874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.424039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.424072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.424261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.424295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.424441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.424474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.424572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.424624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.424803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.424840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.424990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.425026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.425144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.425180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 
00:29:50.757 [2024-11-20 06:40:22.425287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.425359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.425502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.425557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.425743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.425779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.425931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.425966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.426098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.426132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.426248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.426282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.426476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.426509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.426623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.426659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.426804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.426840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.426957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.426994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 
00:29:50.757 [2024-11-20 06:40:22.427113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.427150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.427269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.427315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.427452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.427485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.427594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.757 [2024-11-20 06:40:22.427626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.757 qpair failed and we were unable to recover it. 00:29:50.757 [2024-11-20 06:40:22.427787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.427823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.427968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.428004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.428138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.428171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.428316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.428350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.428448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.428481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.428572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.428605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 
00:29:50.758 [2024-11-20 06:40:22.428770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.428825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.428969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.429117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.429285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.429458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.429620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.429778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.429927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.429963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.430153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.430204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.430321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.430358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 
00:29:50.758 [2024-11-20 06:40:22.430462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.430496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.430601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.430634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.430756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.430791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.430933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.430969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.431086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.431125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.431260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.431292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.431439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.431472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.431578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.431612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.431714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.431747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.431882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.431915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 
00:29:50.758 [2024-11-20 06:40:22.432086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.432141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.432284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.432325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.432439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.432472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.432569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.432603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.432731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.432786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.432944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.432982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.433096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.433133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.433248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.433286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.433460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.433493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.433622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.433655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 
00:29:50.758 [2024-11-20 06:40:22.433753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.433787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.433894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.433927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.434088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.434121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.434280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.434323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.434471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.758 [2024-11-20 06:40:22.434504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.758 qpair failed and we were unable to recover it. 00:29:50.758 [2024-11-20 06:40:22.434675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.434712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.434854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.434907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.435054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.435092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.435246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.435283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.435440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.435473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 
00:29:50.759 [2024-11-20 06:40:22.435597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.435635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.435751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.435788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.435920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.435958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.436074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.436112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.436242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.436291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.436419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.436452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.436595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.436629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.436776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.436813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.436926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.436982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.437128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.437167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 
00:29:50.759 [2024-11-20 06:40:22.437315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.437350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.437485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.437519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.437681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.437718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.437859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.437897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.438010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.438048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.438188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.438226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.438366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.438401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.438514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.438547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.438685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.438720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.438875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.438914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 
00:29:50.759 [2024-11-20 06:40:22.439086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.439124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.439274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.439324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.439456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.439489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.439593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.439645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.439795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.439833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.439963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.440002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.440146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.440184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.440352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.440385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.440521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.440554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.440714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.440751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 
00:29:50.759 [2024-11-20 06:40:22.440864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.440902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.441042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.441080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.441223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.441261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.441400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.441434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.441524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.441557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.759 [2024-11-20 06:40:22.441711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.759 [2024-11-20 06:40:22.441749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.759 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.441865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.441903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.442055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.442093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.442210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.442249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.442391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.442425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 
00:29:50.760 [2024-11-20 06:40:22.442540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.442573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.442700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.442738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.442880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.442931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.443089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.443139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.443248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.443285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.443441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.443474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.443599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.443632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.443758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.443810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.443954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.443997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.444117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.444177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 
00:29:50.760 [2024-11-20 06:40:22.444325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.444378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.444489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.444522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.444629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.444663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.444871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.444904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.445032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.445070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.445203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.445238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.445348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.445381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.445525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.445558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.445716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.445754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.445902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.445940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 
00:29:50.760 [2024-11-20 06:40:22.446082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.446119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.446247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.446297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.446446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.446480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.446583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.446616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.446725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.446779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.446884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.446921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.447037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.447097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.447251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.447286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.447418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.447452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.447552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.447604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 
00:29:50.760 [2024-11-20 06:40:22.447729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.447766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.447916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.760 [2024-11-20 06:40:22.447954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.760 qpair failed and we were unable to recover it. 00:29:50.760 [2024-11-20 06:40:22.448096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.448133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.448300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.448365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.448502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.448551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.448685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.448725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.448857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.448911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.449074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.449111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.449237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.449271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.449376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.449409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 
00:29:50.761 [2024-11-20 06:40:22.449509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.449542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.449733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.449771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.449928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.449979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.450131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.450170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.450340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.450375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.450519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.450552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.450673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.450711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.450853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.450890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.451039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.451085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.451205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.451257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 
00:29:50.761 [2024-11-20 06:40:22.451403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.451436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.451569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.451625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.451780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.451820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.451978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.452019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.452154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.452194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.452351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.452396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.452497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.452530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.452654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.452687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.452854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.452887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.453039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.453079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 
00:29:50.761 [2024-11-20 06:40:22.453253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.453286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.453411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.453444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.761 [2024-11-20 06:40:22.453551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.761 [2024-11-20 06:40:22.453584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.761 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.453693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.453726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.453881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.453913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.454068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.454108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.454254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.454294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.454451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.454484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.454594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.454627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.454777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.454817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 
00:29:50.762 [2024-11-20 06:40:22.454930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.454970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.455102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.455141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.455270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.455343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.455485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.455518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.455633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.455684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.455851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.455892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.456016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.456056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.456225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.456265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.456439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.456473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.456602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.456642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 
00:29:50.762 [2024-11-20 06:40:22.456812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.456852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.456998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.457038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.457211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.457243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.457378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.457412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.457514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.457547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.457651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.457684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.457839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.457878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.458031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.458086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.458272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.458350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.458495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.458529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 
00:29:50.762 [2024-11-20 06:40:22.458662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.458697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.458861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.458900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.459025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.459065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.459216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.459269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.459495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.459529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.459695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.459734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.459876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.459916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.460036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.460075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.460219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.460252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.762 [2024-11-20 06:40:22.460376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.460410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 
00:29:50.762 [2024-11-20 06:40:22.460516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.762 [2024-11-20 06:40:22.460549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.762 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.460729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.460778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.460915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.460967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.461132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.461173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.461300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.461365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.461467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.461500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.461609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.461642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.461818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.461858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.461977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.462016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 00:29:50.763 [2024-11-20 06:40:22.462177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.763 [2024-11-20 06:40:22.462216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.763 qpair failed and we were unable to recover it. 
00:29:50.768 [2024-11-20 06:40:22.500094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.500136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.500288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.500350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.500493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.500526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.500627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.500662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.500778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.500811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.500976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.501030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.501160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.501203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.501354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.501389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.501552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.501585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.501701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.501744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 
00:29:50.768 [2024-11-20 06:40:22.501912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.501954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.502117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.768 [2024-11-20 06:40:22.502158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:50.768 qpair failed and we were unable to recover it. 00:29:50.768 [2024-11-20 06:40:22.502362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.502416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.502559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.502594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.502705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.502738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.502875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.502908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.503023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.503080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.503283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.503328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.503441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.503474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.503581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.503620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 
00:29:50.769 [2024-11-20 06:40:22.503812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.503855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.504040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.504083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.504252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.504295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.504451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.504490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.504616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.504649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.504766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.504798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.504922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.504954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.505138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.505183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.505328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.505382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.505554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.505587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 
00:29:50.769 [2024-11-20 06:40:22.505718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.505779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.505977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.506021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.506205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.506248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.506430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.506464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.506593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.506628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.506732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.506764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.506894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.506926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.507055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.507087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.507200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.507240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.507378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.507411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 
00:29:50.769 [2024-11-20 06:40:22.507502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.507535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.507632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.507688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.507871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.507922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.508071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.508114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.508266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.508324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.508489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.508521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.508638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.508678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.508904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.508948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.509145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.509187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 00:29:50.769 [2024-11-20 06:40:22.509356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.769 [2024-11-20 06:40:22.509389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.769 qpair failed and we were unable to recover it. 
00:29:50.769 [2024-11-20 06:40:22.509546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.509579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.509687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.509719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.509847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.509879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.509993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.510052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.510210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.510243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.510377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.510410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.510550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.510583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.510683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.510739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.510909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.510952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.511120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.511175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 
00:29:50.770 [2024-11-20 06:40:22.511356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.511390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.511509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.511549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.511656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.511689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.511824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.511866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.512030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.512073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.512210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.512272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.512424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.512458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.512594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.512627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.512758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.512793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.512988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.513033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 
00:29:50.770 [2024-11-20 06:40:22.513200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.513243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.513401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.513435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.513569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.513600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.513770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.513812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.513985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.514029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.514203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.514246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.514455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.514492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.514710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.514766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.514967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.515010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.515208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.515255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 
00:29:50.770 [2024-11-20 06:40:22.515457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.515489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.515622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.515653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.515766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.515797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.515976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.770 [2024-11-20 06:40:22.516008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.770 qpair failed and we were unable to recover it. 00:29:50.770 [2024-11-20 06:40:22.516196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.516238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.516390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.516422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.516527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.516566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.516757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.516801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.516992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.517034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.517208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.517250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 
00:29:50.771 [2024-11-20 06:40:22.517402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.517435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.517549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.517582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.517717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.517748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.517869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.517901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.518004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.518035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.518175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.518218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.518423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.518456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.518559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.518590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.518727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.518762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.518921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.518976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 
00:29:50.771 [2024-11-20 06:40:22.519138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.519192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.519381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.519415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.519550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.519581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.519700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.519741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.519902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.519945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.520095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.520136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.520350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.520382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.520488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.520526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.520665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.520697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.520805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.520835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 
00:29:50.771 [2024-11-20 06:40:22.520945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.521001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.521160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.521194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:50.771 [2024-11-20 06:40:22.521288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:50.771 [2024-11-20 06:40:22.521329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:50.771 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.521445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.521484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.521612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.521657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.521799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.521833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.522050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.522116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.522264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.522329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.522446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.522481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.522643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.522687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 
00:29:51.052 [2024-11-20 06:40:22.522829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.522873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.523046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.523091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.523227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.523271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.523480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.523514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.523624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.523658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.523767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.523846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.523982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.524058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.524209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.524253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.524407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.524443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.524566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.524599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 
00:29:51.052 [2024-11-20 06:40:22.524710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.524742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.524853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.524889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.052 qpair failed and we were unable to recover it. 00:29:51.052 [2024-11-20 06:40:22.525078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.052 [2024-11-20 06:40:22.525122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.525266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.525318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.525446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.525477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.525592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.525649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.525825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.525870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.526034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.526077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.526246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.526288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.526439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.526470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 
00:29:51.053 [2024-11-20 06:40:22.526573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.526610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.526731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.526776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.526915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.526958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.527129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.527172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.527342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.527396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.527530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.527570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.527731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.527772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.527911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.527953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.528129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.528172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.528356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.528396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 
00:29:51.053 [2024-11-20 06:40:22.528501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.528532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.528671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.528703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.528828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.528860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.528974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.529028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.529229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.529295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.529486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.529522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.529633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.529667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.529875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.529918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.530097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.530140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.530324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.530385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 
00:29:51.053 [2024-11-20 06:40:22.530522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.530556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.530683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.053 [2024-11-20 06:40:22.530741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.053 qpair failed and we were unable to recover it. 00:29:51.053 [2024-11-20 06:40:22.530914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.530957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.531101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.531161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.531299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.531343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.531451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.531484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.531644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.531676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.531830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.531870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.532061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.532105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.532241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.532285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 
00:29:51.054 [2024-11-20 06:40:22.532438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.532472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.532607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.532639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.532780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.532836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.532976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.533019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.533165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.533225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.533362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.533395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.533529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.533563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.533748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.533792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.533940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.533991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.534176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.534223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 
00:29:51.054 [2024-11-20 06:40:22.534405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.534438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.534554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.534586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.534745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.534801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.534946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.534990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.535131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.535174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.535323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.535376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.535491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.535525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.535660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.535693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.535857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.535902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 00:29:51.054 [2024-11-20 06:40:22.536074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.536118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.054 qpair failed and we were unable to recover it. 
00:29:51.054 [2024-11-20 06:40:22.536276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.054 [2024-11-20 06:40:22.536316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.536428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.536460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.536615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.536659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.536879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.536912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.537131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.537176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.537358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.537392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.537525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.537557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.537663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.537722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.537870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.537914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.538054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.538086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 
00:29:51.055 [2024-11-20 06:40:22.538266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.538321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.538459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.538491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.538601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.538635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.538776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.538831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.538962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.539005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.539174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.539208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.539337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.539370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.539507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.539546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.539677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.539723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.539889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.539933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 
00:29:51.055 [2024-11-20 06:40:22.540103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.540146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.540291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.540363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.540528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.540578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.540763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.540807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.540991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.541035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.541194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.541227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.541338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.541372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.541482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.541515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.541654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.055 [2024-11-20 06:40:22.541687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.055 qpair failed and we were unable to recover it. 00:29:51.055 [2024-11-20 06:40:22.541870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.541928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 
00:29:51.056 [2024-11-20 06:40:22.542091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.542136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.542275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.542343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.542481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.542515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.542706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.542749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.542924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.542967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.543107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.543150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.543327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.543379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.543522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.543555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.543667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.543699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.543864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.543909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 
00:29:51.056 [2024-11-20 06:40:22.544042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.544088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.544248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.544280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.544408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.544442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.544576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.544610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.544775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.544808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.544961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.545007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.545172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.545219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.545388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.545421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.545551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.545583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.545742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.545789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 
00:29:51.056 [2024-11-20 06:40:22.545936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.545986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.546139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.546184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.546388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.546421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.546590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.546637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.546760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.546808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.546951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.546996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.547170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.547216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.547398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.547437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.547555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.547608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.547761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.547810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 
00:29:51.056 [2024-11-20 06:40:22.548040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.548087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.548247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.548292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.548458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.056 [2024-11-20 06:40:22.548491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.056 qpair failed and we were unable to recover it. 00:29:51.056 [2024-11-20 06:40:22.548658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.548704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.548863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.548912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.549103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.549150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.549316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.549373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.549487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.549519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.549665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.549698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.549810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.549869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 
00:29:51.057 [2024-11-20 06:40:22.550029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.550075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.550231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.550276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.550418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.550451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.550557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.550590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.550698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.550751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.550914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.550961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.551111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.551144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.551247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.551279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.551424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.551458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.551569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.551622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 
00:29:51.057 [2024-11-20 06:40:22.551763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.551823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.552035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.552082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.552230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.552277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.552441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.552474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.552614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.552684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.552876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.552925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.553157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.553206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.553419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.553455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.553588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.553621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.553756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.553802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 
00:29:51.057 [2024-11-20 06:40:22.554018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.554063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.554204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.554244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.057 qpair failed and we were unable to recover it. 00:29:51.057 [2024-11-20 06:40:22.554392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.057 [2024-11-20 06:40:22.554426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.554562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.554593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.554754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.554805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.554987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.555031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.555242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.555292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.555457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.555490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.555685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.555735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.555882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.555927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 
00:29:51.058 [2024-11-20 06:40:22.556131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.556181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.556437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.556472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.556583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.556615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.556774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.556806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.556913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.556952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.557135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.557181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.557394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.557428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.557566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.557598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.557729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.557772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.557967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.558027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 
00:29:51.058 [2024-11-20 06:40:22.558275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.558342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.558507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.558539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.558703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.558749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.558939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.558985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.559143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.559198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.559394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.559460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.559663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.559709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.559895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.559947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.058 [2024-11-20 06:40:22.560129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.058 [2024-11-20 06:40:22.560175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.058 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.560358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.560396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 
00:29:51.059 [2024-11-20 06:40:22.560537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.560568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.560689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.560720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.560853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.560885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.561078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.561125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.561312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.561371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.561509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.561541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.561642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.561680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.561871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.561918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.562130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.562175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.562364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.562397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 
00:29:51.059 [2024-11-20 06:40:22.562512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.562544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.562641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.562673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.562783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.562814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.562937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.562969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.563084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.563141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.563342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.563375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.563483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.563515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.563622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.563653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.563754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.563816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.564023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.564070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 
00:29:51.059 [2024-11-20 06:40:22.564230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.564275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.564458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.564491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.564640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.564672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.564771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.564828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.564965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.565011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.565185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.565245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.565428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.565461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.565559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.565591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.565748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.565780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.565949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.565994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 
00:29:51.059 [2024-11-20 06:40:22.566137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.566182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.059 [2024-11-20 06:40:22.566370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.059 [2024-11-20 06:40:22.566404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.059 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.566521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.566560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.566743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.566791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.566969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.567013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.567155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.567201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.567388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.567429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.567574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.567605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.567710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.567741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.567912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.567944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 
00:29:51.060 [2024-11-20 06:40:22.568094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.568152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.568370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.568404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.568505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.568536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.568667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.568698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.568876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.568935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.569093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.569145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.569339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.569392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.569502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.569534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.569708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.569756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.569935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.569981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 
00:29:51.060 [2024-11-20 06:40:22.570170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.570216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.570421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.570456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.570598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.570631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.570759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.570791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.570928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.570960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.571131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.571182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.571378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.571412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.571517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.571549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.571684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.571717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.571921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.571968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 
00:29:51.060 [2024-11-20 06:40:22.572156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.572201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.572389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.572423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.572586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.572620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.572726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.572758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.572923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.060 [2024-11-20 06:40:22.572956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.060 qpair failed and we were unable to recover it. 00:29:51.060 [2024-11-20 06:40:22.573101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.573150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.573321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.573355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.573473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.573506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.573651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.573683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.573818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.573905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 
00:29:51.061 [2024-11-20 06:40:22.574090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.574137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.574323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.574378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.574503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.574535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.574687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.574721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.574882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.574914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.575016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.575048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.575159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.575196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.575365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x206df30 is same with the state(6) to be set 00:29:51.061 [2024-11-20 06:40:22.575571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.575621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.575765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.575831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 
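The one message above that differs from the rest comes from nvme_tcp_qpair_set_recv_state() and indicates that the receive state being requested for the qpair is the state it is already in. A hypothetical sketch of that kind of guard is shown below; the struct, enum values, and function body are illustrative assumptions, not SPDK's definitions.

/*
 * Hypothetical illustration of the kind of guard that produces the
 * "recv state ... is same with the state ... to be set" message above.
 * The type names, enum values, and logging call are assumptions, not SPDK's.
 */
#include <stdio.h>

enum recv_state { RECV_STATE_READY = 0, RECV_STATE_QUIESCING = 6 }; /* placeholder values */

struct tcp_qpair {
    enum recv_state recv_state;
};

static void set_recv_state(struct tcp_qpair *q, enum recv_state new_state)
{
    if (q->recv_state == new_state) {
        /* Requested state equals the current state; the sketch just reports it and returns. */
        fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                (void *)q, (int)new_state);
        return;
    }
    q->recv_state = new_state;
}

int main(void)
{
    struct tcp_qpair q = { .recv_state = RECV_STATE_QUIESCING };
    set_recv_state(&q, RECV_STATE_QUIESCING); /* triggers the duplicate-state message */
    return 0;
}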
00:29:51.061 [2024-11-20 06:40:22.575988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.576038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.576200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.576234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.576347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.576383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.576498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.576533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.576659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.576693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.576801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.576834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.576969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.577015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.577137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.577184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.577352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.577386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.577529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.577563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 
00:29:51.061 [2024-11-20 06:40:22.577794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.577827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.578032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.578078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.578216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.578262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.578433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.578468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.578604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.578638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.578775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.061 [2024-11-20 06:40:22.578808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.061 qpair failed and we were unable to recover it. 00:29:51.061 [2024-11-20 06:40:22.578919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.578952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.579116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.579149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.579262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.579296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.579447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.579481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 
00:29:51.062 [2024-11-20 06:40:22.579597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.579631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.579814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.579848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.580075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.580107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.580250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.580297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.580477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.580510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.580612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.580666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.580820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.580873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.581060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.581106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.581266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.581337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.581483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.581542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 
00:29:51.062 [2024-11-20 06:40:22.581732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.581780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.581940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.581988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.582191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.582238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.582430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.582465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.582583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.582616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.582752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.582812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.583000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.583058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.583201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.583234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.583343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.583377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.583519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.583553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 
00:29:51.062 [2024-11-20 06:40:22.583715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.583766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.583932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.583979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.584171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.584224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.584413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.584447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.584613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.584663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.584888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.584939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.585119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.062 [2024-11-20 06:40:22.585180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.062 qpair failed and we were unable to recover it. 00:29:51.062 [2024-11-20 06:40:22.585408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.585442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.585581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.585614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.585747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.585780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 
00:29:51.063 [2024-11-20 06:40:22.585932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.585978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.586156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.586217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.586352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.586393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.586495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.586528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.586673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.586720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.586904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.586968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.587121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.587168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.587344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.587377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.587490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.587524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.587664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.587723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 
00:29:51.063 [2024-11-20 06:40:22.587896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.587945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.588145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.588194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.588388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.588423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.588526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.588559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.588715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.588761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.588956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.588989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.589131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.589177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.589370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.589403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.589545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.589578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.589686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.589719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 
00:29:51.063 [2024-11-20 06:40:22.589919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.589965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.590172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.590229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.590374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.590408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.590574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.063 [2024-11-20 06:40:22.590644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.063 qpair failed and we were unable to recover it. 00:29:51.063 [2024-11-20 06:40:22.590843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.590892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.591096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.591146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.591329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.591381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.591521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.591554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.591663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.591704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.591873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.591919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 
00:29:51.064 [2024-11-20 06:40:22.592109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.592155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.592350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.592384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.592491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.592531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.592717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.592765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.592962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.593011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.593227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.593300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.593467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.593502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.593675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.593736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.593975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.594022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.594200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.594258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 
00:29:51.064 [2024-11-20 06:40:22.594410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.594444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.594581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.594614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.594745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.594795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.594942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.594989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.595138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.595185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.595371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.595405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.595526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.595559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.595696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.595744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.595907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.595961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.596186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.596236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 
00:29:51.064 [2024-11-20 06:40:22.596469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.596517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.596685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.596720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.596855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.596888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.597055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.597113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.597284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.597366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.597482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.064 [2024-11-20 06:40:22.597515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.064 qpair failed and we were unable to recover it. 00:29:51.064 [2024-11-20 06:40:22.597650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.597712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.597928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.597979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.598171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.598220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.598417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.598453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 
00:29:51.065 [2024-11-20 06:40:22.598566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.598600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.598758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.598808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.598970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.599020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.599207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.599257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.599480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.599513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.599647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.599681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.599796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.599829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.599990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.600023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.600180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.600213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.600337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.600372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 
00:29:51.065 [2024-11-20 06:40:22.600489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.600523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.600657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.600724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.600894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.600943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.601105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.601156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.601361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.601395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.601556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.601589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.601747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.601798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.601969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.602021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.602224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.602274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.602468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.602501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 
00:29:51.065 [2024-11-20 06:40:22.602668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.602701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.602830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.602880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.603132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.603195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.603322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.603355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.603489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.603524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.603718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.603767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.603946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.603996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.604203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.604236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.065 [2024-11-20 06:40:22.604342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.065 [2024-11-20 06:40:22.604376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.065 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.604510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.604545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 
00:29:51.066 [2024-11-20 06:40:22.604678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.604717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.604855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.604906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.605083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.605133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.605315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.605371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.605501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.605534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.605695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.605728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.605887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.605936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.606149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.606199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.606378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.606414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.606551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.606584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 
00:29:51.066 [2024-11-20 06:40:22.606754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.606817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.607023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.607084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.607250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.607329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.607477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.607512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.607644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.607677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.607816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.607849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.607982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.608035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.608209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.608258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.608443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.608478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.608647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.608698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 
00:29:51.066 [2024-11-20 06:40:22.608860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.608909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.609066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.609117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.609345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.609380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.609514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.609549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.609651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.609702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.609892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.609940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.610119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.610169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.610375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.610409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.610514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.610547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.066 qpair failed and we were unable to recover it. 00:29:51.066 [2024-11-20 06:40:22.610650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.066 [2024-11-20 06:40:22.610682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 
00:29:51.067 [2024-11-20 06:40:22.610867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.610916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.611103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.611152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.611390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.611424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.611602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.611651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.611874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.611929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.612112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.612162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.612358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.612391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.612503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.612535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.612670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.612703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.612814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.612869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 
00:29:51.067 [2024-11-20 06:40:22.613030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.613069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.613252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.613317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.613479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.613513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.613658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.613707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.613875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.613927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.614119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.614171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.614398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.614432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.614533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.614566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.614699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.614748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.614900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.614949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 
00:29:51.067 [2024-11-20 06:40:22.615102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.615151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.615356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.615389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.615553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.615586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.615717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.615766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.615992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.616041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.616185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.616245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.616362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.616396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.616535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.616568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.616751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.616800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.616969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.617003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 
00:29:51.067 [2024-11-20 06:40:22.617175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.617225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.617416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.617450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.617554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.067 [2024-11-20 06:40:22.617588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.067 qpair failed and we were unable to recover it. 00:29:51.067 [2024-11-20 06:40:22.617774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.617824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.618028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.618077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.618241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.618274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.618394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.618428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.618547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.618581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.618685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.618720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.618912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.618961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 
00:29:51.068 [2024-11-20 06:40:22.619150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.619212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.619328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.619363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.619465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.619499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.619671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.619720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.619908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.619941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.620121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.620171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.620372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.620406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.620570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.620602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.620754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.620811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.620978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.621027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 
00:29:51.068 [2024-11-20 06:40:22.621173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.621230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.621426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.621460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.621564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.621599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.621735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.621767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.621872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.621926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.622082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.622131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.622340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.622373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.622510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.622544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.622697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.622747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.622924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.622973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 
00:29:51.068 [2024-11-20 06:40:22.623165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.623214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.623433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.068 [2024-11-20 06:40:22.623467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.068 qpair failed and we were unable to recover it. 00:29:51.068 [2024-11-20 06:40:22.623578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.623611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.623775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.623837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.624015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.624076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.624273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.624351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.624505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.624538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.624681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.624734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.624936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.624985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.625209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.625258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 
00:29:51.069 [2024-11-20 06:40:22.625457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.625491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.625614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.625647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.625758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.625792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.625980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.626030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.626176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.626225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.626414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.626448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.626561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.626594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.626741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.626774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.626883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.626939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.627173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.627221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 
00:29:51.069 [2024-11-20 06:40:22.627427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.627461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.627571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.627624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.627866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.627915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.628119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.628168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.628369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.628403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.628620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.628653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.628811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.628844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.628974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.629026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.629277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.069 [2024-11-20 06:40:22.629316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.069 qpair failed and we were unable to recover it. 00:29:51.069 [2024-11-20 06:40:22.629435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.629469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 
00:29:51.070 [2024-11-20 06:40:22.629584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.629639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.629834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.629882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.630072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.630130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.630300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.630348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.630459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.630492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.630631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.630664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.630882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.630928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.631084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.631150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.631319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.631381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.631546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.631579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 
00:29:51.070 [2024-11-20 06:40:22.631753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.631804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.631959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.632008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.632242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.632274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.632423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.632457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.632602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.632636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.632826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.632872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.633086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.633161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.633358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.633393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.633559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.633617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.633783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.633846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 
00:29:51.070 [2024-11-20 06:40:22.634031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.634081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.634342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.634376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.634484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.634517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.634628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.634661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.634800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.634861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.635058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.635107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.635346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.635380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.635528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.635561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.635820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.635869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.636038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.636071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 
00:29:51.070 [2024-11-20 06:40:22.636315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.636370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.070 [2024-11-20 06:40:22.636612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.070 [2024-11-20 06:40:22.636661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.070 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.636859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.636892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.637026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.637059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.637280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.637357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.637464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.637496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.637637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.637670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.637834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.637880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.638109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.638158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.638328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.638379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 
00:29:51.071 [2024-11-20 06:40:22.638540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.638580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.638729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.638762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.638924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.638973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.639126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.639176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.639384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.639417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.639516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.639549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.639752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.639802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.640026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.640075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.640271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.640331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.640559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.640608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 
00:29:51.071 [2024-11-20 06:40:22.640790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.640822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.640931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.640964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.641163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.641212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.641413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.641447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.641562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.641595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.641741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.641792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.641944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.641995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.642196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.642246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.642433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.642466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.642610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.642643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 
00:29:51.071 [2024-11-20 06:40:22.642754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.642810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.642976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.643025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.643211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.643260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.643414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.643448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.071 [2024-11-20 06:40:22.643560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.071 [2024-11-20 06:40:22.643610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.071 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.643762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.643813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.644052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.644102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.644360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.644394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.644537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.644570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.644666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.644699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 
00:29:51.072 [2024-11-20 06:40:22.644876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.644925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.645087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.645136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.645285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.645354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.645517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.645550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.645650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.645684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.645799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.645831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.646004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.646053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.646205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.646256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.646466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.646500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.646620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.646653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 
00:29:51.072 [2024-11-20 06:40:22.646817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.646856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.647017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.647069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.647285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.647361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.647493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.647527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.647650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.647703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.647898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.647948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.648149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.648199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.648360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.648412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.648596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.648648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.648834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.648883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 
00:29:51.072 [2024-11-20 06:40:22.649105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.649155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.649345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.649395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.649594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.649641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.649821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.649885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.072 [2024-11-20 06:40:22.650082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.072 [2024-11-20 06:40:22.650132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.072 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.650379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.650433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.650633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.650666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.650770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.650803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.650939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.650972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.651105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.651170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 
00:29:51.073 [2024-11-20 06:40:22.651411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.651465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.651629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.651684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.651928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.651981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.652175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.652229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.652453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.652510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.652723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.652773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.652938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.652990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.653225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.653276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.653484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.653534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.653675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.653725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 
00:29:51.073 [2024-11-20 06:40:22.653918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.653968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.654155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.654203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.654398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.654449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.654669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.654721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.654968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.655001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.655176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.655208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.655435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.655488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.655701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.655754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.655966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.656000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.656112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.656145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 
00:29:51.073 [2024-11-20 06:40:22.656341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.656404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.656605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.656659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.656850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.656902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.657120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.657173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.657364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.657419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.657579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.657631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.657827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.073 [2024-11-20 06:40:22.657880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.073 qpair failed and we were unable to recover it. 00:29:51.073 [2024-11-20 06:40:22.658036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.658090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.658277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.658344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.658532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.658585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 
00:29:51.074 [2024-11-20 06:40:22.658793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.658847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.659036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.659088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.659231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.659284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.659514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.659567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.659734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.659786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.659984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.660037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.660227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.660282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.660553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.660607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.660843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.660895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.661128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.661181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 
00:29:51.074 [2024-11-20 06:40:22.661391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.661445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.661646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.661699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.661888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.661940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.662101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.662153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.662395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.662449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.662662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.662714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.662873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.662927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.663151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.663205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.663391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.663445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.663629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.663681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 
00:29:51.074 [2024-11-20 06:40:22.663852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.663907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.664111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.664165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.074 [2024-11-20 06:40:22.664403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.074 [2024-11-20 06:40:22.664457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.074 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.664652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.664706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.664873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.664925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.665120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.665172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.665340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.665396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.665567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.665620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.665862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.665915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.666123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.666175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 
00:29:51.075 [2024-11-20 06:40:22.666417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.666480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.666682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.666736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.666954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.667006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.667206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.667261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.667434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.667489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.667727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.667780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.667974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.668027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.668229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.668282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.668537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.668590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 00:29:51.075 [2024-11-20 06:40:22.668783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.075 [2024-11-20 06:40:22.668836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.075 qpair failed and we were unable to recover it. 
00:29:51.081 [2024-11-20 06:40:22.720687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.720740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.720964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.721035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.721283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.721346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.721608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.721678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.721948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.722019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.722224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.722276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.722500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.722573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.722799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.722871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.723110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.723182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.723418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.723490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 
00:29:51.081 [2024-11-20 06:40:22.723721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.723795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.081 [2024-11-20 06:40:22.723980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.081 [2024-11-20 06:40:22.724052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.081 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.724259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.724340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.724600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.724674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.724963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.725034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.725207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.725259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.725453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.725527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.725716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.725767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.725941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.725995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.726230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.726283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.082 [2024-11-20 06:40:22.726571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.726642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.726829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.726900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.727112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.727165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.727354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.727408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.727623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.727699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.727898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.727974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.728134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.728187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.728397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.728459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.728741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.728813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.729013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.729066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.082 [2024-11-20 06:40:22.729270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.729333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.729603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.729679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.729910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.729981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.730185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.730238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.730467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.730541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.730820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.730892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.731109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.731161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.731368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.731424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.731664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.731734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.731976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.732047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 
00:29:51.082 [2024-11-20 06:40:22.732245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.082 [2024-11-20 06:40:22.732297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.082 qpair failed and we were unable to recover it. 00:29:51.082 [2024-11-20 06:40:22.732615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.732688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.732963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.733034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.733236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.733289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.733540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.733612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.733815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.733886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.734045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.734098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.734317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.734372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.734552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.734633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.734873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.734945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-11-20 06:40:22.735119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.735172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.735409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.735482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.735726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.735779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.735966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.736038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.736245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.736297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.736509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.736555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.736690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.736736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.736940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.736992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.737184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.737236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.737443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.737497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-11-20 06:40:22.737664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.737719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.737918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.737970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.738126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.738178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.738393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.738469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.738726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.738772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.739008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.739061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.739269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.739338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.739582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.739663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.739945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.740017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.740215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.740267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 
00:29:51.083 [2024-11-20 06:40:22.740552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.740624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.740902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.740972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.741181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.741233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.741473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.083 [2024-11-20 06:40:22.741528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.083 qpair failed and we were unable to recover it. 00:29:51.083 [2024-11-20 06:40:22.741797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.741868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.742078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.742131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.742277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.742346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.742610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.742680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.742961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.743036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.743212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.743265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-11-20 06:40:22.743486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.743560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.743821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.743893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.744108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.744161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.744364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.744418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.744701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.744773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.745006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.745078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.745319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.745372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.745653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.745724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.746005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.746076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.746274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.746339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-11-20 06:40:22.746567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.746639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.746915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.746987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.747174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.747226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.747444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.747516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.747676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.747728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.747919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.747972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.748174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.748226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.748508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.748564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.748760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.748812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.749014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.749066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 
00:29:51.084 [2024-11-20 06:40:22.749246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.749300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.749486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.749540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.749819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.749891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.750094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.750146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.750367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.750445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.084 [2024-11-20 06:40:22.750689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.084 [2024-11-20 06:40:22.750744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.084 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.751019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.751091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.751262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.751339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.751585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.751656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.751880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.751954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 
00:29:51.085 [2024-11-20 06:40:22.752109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.752163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.752406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.752477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.752756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.752827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.753069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.753122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.753328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.753382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.753619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.753689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.753957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.754029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.754290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.754377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.754621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.754676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.754949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.755022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 
00:29:51.085 [2024-11-20 06:40:22.755266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.755335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.755542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.755622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.755897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.755967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.756188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.756241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.756508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.756581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.756827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.756880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.757120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.757173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.757345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.757399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.757645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.757717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.757951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.758023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 
00:29:51.085 [2024-11-20 06:40:22.758248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.085 [2024-11-20 06:40:22.758294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.085 qpair failed and we were unable to recover it. 00:29:51.085 [2024-11-20 06:40:22.758495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.758542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.758784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.758855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.759090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.759143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.759374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.759451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.759702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.759772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.759959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.760034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.760267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.760331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.760584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.760656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 00:29:51.086 [2024-11-20 06:40:22.760940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.086 [2024-11-20 06:40:22.760986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.086 qpair failed and we were unable to recover it. 
00:29:51.086 [2024-11-20 06:40:22.761162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.086 [2024-11-20 06:40:22.761226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420
00:29:51.086 qpair failed and we were unable to recover it.
[... the same three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnect attempt between 06:40:22.761 and 06:40:22.820 ...]
00:29:51.092 [2024-11-20 06:40:22.820486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.092 [2024-11-20 06:40:22.820559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420
00:29:51.092 qpair failed and we were unable to recover it.
00:29:51.092 [2024-11-20 06:40:22.820805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.820876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.821077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.821132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.821334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.821389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.821612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.821685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.821867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.821940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.822173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.822226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.822418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.822494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.822730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.822802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.823037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.823089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 00:29:51.092 [2024-11-20 06:40:22.823397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.092 [2024-11-20 06:40:22.823474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.092 qpair failed and we were unable to recover it. 
00:29:51.092 [2024-11-20 06:40:22.823707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.823780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.824018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.824092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.824294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.824358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.824565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.824611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.824772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.824818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.825075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.825145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.825403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.825483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.825717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.825769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.825965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.826018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.826250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.826315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 
00:29:51.093 [2024-11-20 06:40:22.826559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.826630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.826902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.826973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.827167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.827219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.827415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.827471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.827712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.827783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.828047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.828093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.828235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.828281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.828477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.828548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.828758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.828829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.829072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.829144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 
00:29:51.093 [2024-11-20 06:40:22.829334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.829389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.829632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.829704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.829931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.830003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.830239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.830291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.830539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.830612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.830846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.830918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.831073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.831126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.831292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.831385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.831618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.831664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.831844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.831910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 
00:29:51.093 [2024-11-20 06:40:22.832101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.832153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.832391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.832445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.832649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.832702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.832910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.832956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.833138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.833202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.833400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.833454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.833619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.833673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.093 [2024-11-20 06:40:22.833852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.093 [2024-11-20 06:40:22.833905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.093 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.834099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.834151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.834344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.834399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 
00:29:51.094 [2024-11-20 06:40:22.834601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.834656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.834873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.834926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.835161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.835214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.835425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.835479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.835674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.835727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.835968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.836020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.836220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.836274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.836500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.836554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.836790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.836860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.837101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.837154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 
00:29:51.094 [2024-11-20 06:40:22.837391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.837446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.837655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.837702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.837858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.837922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.838144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.838194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.838373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.838424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.838608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.838657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.838818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.838867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.839093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.839143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.839316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.839389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.839576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.839647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 
00:29:51.094 [2024-11-20 06:40:22.839887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.839957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.840137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.840191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.840398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.840453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.840670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.840740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.840947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.840999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.841208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.841260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.841500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.841574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.841778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.841859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.842106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.842158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.842366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.842420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 
00:29:51.094 [2024-11-20 06:40:22.842653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.842725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.842904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.842956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.843165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.843220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.843505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.843560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.843816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.843898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.844079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.844132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.844300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.094 [2024-11-20 06:40:22.844365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.094 qpair failed and we were unable to recover it. 00:29:51.094 [2024-11-20 06:40:22.844604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.844674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.844908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.844982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.845189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.845245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 
00:29:51.095 [2024-11-20 06:40:22.845458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.845511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.845681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.845736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.845908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.845962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.846194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.846247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.846493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.846566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.846807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.846880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.847078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.847130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.847321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.847374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.847582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.847660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.847905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.847959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 
00:29:51.095 [2024-11-20 06:40:22.848165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.848217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.848456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.848531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.848754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.848825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.849055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.849108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.849327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.849381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.849587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.849661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.849869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.849940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.850141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.850193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.850420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.850495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.850740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.850810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 
00:29:51.095 [2024-11-20 06:40:22.851009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.851062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.851243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.851296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.851489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.851542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.851719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.851772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.851975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.852028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.852228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.852282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.852551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.852604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.852759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.095 [2024-11-20 06:40:22.852814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.095 qpair failed and we were unable to recover it. 00:29:51.095 [2024-11-20 06:40:22.853057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.853109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.853334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.853389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 
00:29:51.096 [2024-11-20 06:40:22.853592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.853647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.853892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.853966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.854173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.854226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.854403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.854478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.854667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.854745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.854980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.855033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.855185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.855237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.855503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.855583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.855812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.855884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.856076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.856128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 
00:29:51.096 [2024-11-20 06:40:22.856343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.856397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.856668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.856722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.856874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.856927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.857105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.857157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.857406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.857478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.857642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.857697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.857855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.857910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.858113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.858166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.858343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.858398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.858610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.858663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 
00:29:51.096 [2024-11-20 06:40:22.858877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.858929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.859088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.859141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.859313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.859368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.859580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.859658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.859912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.859977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.860141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.860194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.860387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.860443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.860693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.860745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.860916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.860971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.861152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.861206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 
00:29:51.096 [2024-11-20 06:40:22.861390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.861443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.861593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.861646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.861817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.861871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.862063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.862144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.862375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.862429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.862636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.096 [2024-11-20 06:40:22.862689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.096 qpair failed and we were unable to recover it. 00:29:51.096 [2024-11-20 06:40:22.862937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.862989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.097 [2024-11-20 06:40:22.863151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.863203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.097 [2024-11-20 06:40:22.863442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.863497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.097 [2024-11-20 06:40:22.863702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.863755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 
00:29:51.097 [2024-11-20 06:40:22.863940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.863993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.097 [2024-11-20 06:40:22.864195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.864250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.097 [2024-11-20 06:40:22.864589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.864689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.097 [2024-11-20 06:40:22.864993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.097 [2024-11-20 06:40:22.865064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.097 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.865356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.865435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.865716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.865782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.866081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.866151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.866384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.866438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.866634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.866711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.866908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.866973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
00:29:51.367 [2024-11-20 06:40:22.867234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.867299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.867661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.867739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.868009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.868085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.868326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.868406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.868638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.868719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.868921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.868987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.869217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.869269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.869524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.869626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.869862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.869975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.870184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.870261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
00:29:51.367 [2024-11-20 06:40:22.870504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.870580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.870823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.870878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.871050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.871138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.871440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.871496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.871666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.871756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.871919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.871972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.872147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.872199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.872368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.872423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.872626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.872679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 00:29:51.367 [2024-11-20 06:40:22.872928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.367 [2024-11-20 06:40:22.872981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.367 qpair failed and we were unable to recover it. 
00:29:51.367 [2024-11-20 06:40:22.873174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.873227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.873443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.873516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.873773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.873844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.874017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.874069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.874270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.874342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.874590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.874643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.874848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.874901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.875105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.875158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.875376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.875431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.875634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.875707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 
00:29:51.368 [2024-11-20 06:40:22.875915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.875968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.876123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.876177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.876407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.876486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.876670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.876725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.876936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.876989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.877198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.877250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.877511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.877586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.877860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.877932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.878099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.878151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.878327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.878381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 
00:29:51.368 [2024-11-20 06:40:22.878611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.878685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.878898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.878952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.879163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.879215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.879443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.879497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.879668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.879721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.879914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.879967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.880169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.880222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.880447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.880519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.880743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.880814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.881008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.881060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 
00:29:51.368 [2024-11-20 06:40:22.881269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.881337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.881576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.881630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.881847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.881919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.882114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.882167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.882339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.882402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.882680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.882751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.883025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.883097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.368 [2024-11-20 06:40:22.883323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.368 [2024-11-20 06:40:22.883377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.368 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.883559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.883634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.883882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.883953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 
00:29:51.369 [2024-11-20 06:40:22.884140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.884193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.884360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.884415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.884607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.884685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.884929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.885001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.885198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.885250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.885487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.885561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.885843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.885916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.886090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.886142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.886374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.886451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.886650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.886723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 
00:29:51.369 [2024-11-20 06:40:22.886991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.887063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.887266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.887331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.887548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.887620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.887832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.887906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.888117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.888171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.888382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.888458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.888630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.888706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.888873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.888926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.889139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.889191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.889393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.889447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 
00:29:51.369 [2024-11-20 06:40:22.889651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.889706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.889929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.889983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.890185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.890237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.890491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.890564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.890762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.890835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.891003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.891058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.891262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.891331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.891518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.891572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.891777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.891830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.892015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.892069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 
00:29:51.369 [2024-11-20 06:40:22.892257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.892324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.892492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.892547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.892717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.892771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.893006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.893059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.893222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.893277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.369 qpair failed and we were unable to recover it. 00:29:51.369 [2024-11-20 06:40:22.893490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.369 [2024-11-20 06:40:22.893545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.893748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.893820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.894057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.894110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.894272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.894344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.894524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.894577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 
00:29:51.370 [2024-11-20 06:40:22.894804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.894876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.895054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.895105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.895269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.895338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.895510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.895563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.895725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.895777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.895974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.896027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.896222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.896275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.896526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.896578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.896802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.896854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.897089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.897141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 
00:29:51.370 [2024-11-20 06:40:22.897380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.897435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.897676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.897729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.897963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.898015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.898212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.898263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.898501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.898574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.898862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.898939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.899121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.899173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.899358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.899438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.899645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.899717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.899964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.900042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 
00:29:51.370 [2024-11-20 06:40:22.900210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.900262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.900532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.900594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.900827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.900879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.901082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.901135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.901293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.901363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.901529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.901583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.901757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.901810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.902022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.902075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.902270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.902340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.902582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.902635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 
00:29:51.370 [2024-11-20 06:40:22.902878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.902950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.903120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.903173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.370 [2024-11-20 06:40:22.903370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.370 [2024-11-20 06:40:22.903450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.370 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.903718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.903789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.904019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.904071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.904224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.904276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.904462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.904537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.904749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.904801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.904950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.905001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.905210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.905262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 
00:29:51.371 [2024-11-20 06:40:22.905500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.905574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.905800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.905874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.906075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.906128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.906348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.906401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.906608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.906680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.906919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.906973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.907146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.907199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.907397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.907471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.907754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.907826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.908020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.908073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 
00:29:51.371 [2024-11-20 06:40:22.908262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.908326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.908549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.908623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.908871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.908943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.909142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.909194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.909365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.909418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.909654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.909726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.909957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.910027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.910233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.910285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.910522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.910592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.910856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.910928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 
00:29:51.371 [2024-11-20 06:40:22.911099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.911151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.911377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.911460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.911736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.911807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.911978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.912032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.371 [2024-11-20 06:40:22.912233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.371 [2024-11-20 06:40:22.912288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.371 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.912484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.912537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.912701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.912754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.912946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.912999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.913179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.913231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.913444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.913498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 
00:29:51.372 [2024-11-20 06:40:22.913675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.913729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.913890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.913942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.914173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.914226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.914449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.914502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.914719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.914772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.914990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.915043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.915198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.915250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.915469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.915541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.915814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.915884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.916042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.916095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 
00:29:51.372 [2024-11-20 06:40:22.916274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.916344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.916603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.916675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.916908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.916980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.917195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.917248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.917504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.917579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.917773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.917847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.918028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.918081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.918291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.918375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.918576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.918650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.918928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.919001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 
00:29:51.372 [2024-11-20 06:40:22.919212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.919266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.919492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.919564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.919801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.919878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.920112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.920166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.920335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.920389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.920579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.920658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.920887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.920962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.921114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.921167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.921377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.921430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.921642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.921695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 
00:29:51.372 [2024-11-20 06:40:22.921897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.921950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.922117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.922178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.372 [2024-11-20 06:40:22.922420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.372 [2024-11-20 06:40:22.922493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.372 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.922711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.922763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.922936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.922988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.923198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.923250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.923485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.923557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.923798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.923876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.924047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.924102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.924347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.924402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 
00:29:51.373 [2024-11-20 06:40:22.924626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.924698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.924917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.924991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.925155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.925209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.925403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.925481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.925683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.925755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.925977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.926029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.926232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.926285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.926597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.926696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.926870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.926923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.927152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.927205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 
00:29:51.373 [2024-11-20 06:40:22.927475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.927548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.927765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.927838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.928032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.928084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.928298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.928364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.928592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.928665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.928902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.928974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.929206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.929258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.929452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.929533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.929758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.929829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.930075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.930149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 
00:29:51.373 [2024-11-20 06:40:22.930380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.930458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.930654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.930730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.930909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.930982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.931194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.931247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.931446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.931520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.931760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.931813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.932012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.932066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.932328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.932383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.932591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.932663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 00:29:51.373 [2024-11-20 06:40:22.932866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.373 [2024-11-20 06:40:22.932968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:51.373 qpair failed and we were unable to recover it. 
00:29:51.374 [2024-11-20 06:40:22.933269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.933396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.933647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.933734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.933995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.934062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.934324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.934380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.934590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.934646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.934948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.935015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.935261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.935333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.935515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.935568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.935739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.935819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.936130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.936197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 
00:29:51.374 [2024-11-20 06:40:22.936469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.936523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.936721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.936786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.937048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.937114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.937352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.937406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.937577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.937629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.937836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.937903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.938174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.938240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.938493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.938547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.938842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.938894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.939103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.939168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 
00:29:51.374 [2024-11-20 06:40:22.939391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.939445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.939642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.939694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.939973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.940053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.940281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.940344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.940550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.940605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.940833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.940898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.941120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.941186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.941440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.941494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.941753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.941820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.942058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.942124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 
00:29:51.374 [2024-11-20 06:40:22.942351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.942404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.942566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.942618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.942787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.942868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.943150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.943216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.943442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.943495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.943722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.943789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.944017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.944082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.374 [2024-11-20 06:40:22.944356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.374 [2024-11-20 06:40:22.944410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.374 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.944609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.944688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.944971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.945037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-11-20 06:40:22.945255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.945352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.945559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.945622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.945817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.945870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.946052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.946118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.946385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.946439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.946627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.946679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.946932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.947001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.947256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.947354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.947597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.947650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.947855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.947921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-11-20 06:40:22.948212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.948277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.948541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.948609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.948830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.948897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.949184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.949249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.949469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.949536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.949799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.949865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.950066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.950132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.950337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.950404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.950684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.950750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.950999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.951065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-11-20 06:40:22.951355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.951422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.951627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.951695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.951939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.952005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.952217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.952284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.952506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.952572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.952802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.952868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.953102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.953169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.953399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.953462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.953724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.953787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.954068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.954134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 
00:29:51.375 [2024-11-20 06:40:22.954391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.954458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.954661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.954726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.954944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.955011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.955188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.955255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.955469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.375 [2024-11-20 06:40:22.955537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.375 qpair failed and we were unable to recover it. 00:29:51.375 [2024-11-20 06:40:22.955833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.955899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.956196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.956261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.956511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.956577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.956841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.956908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.957152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.957218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-11-20 06:40:22.957467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.957534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.957776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.957853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.958068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.958134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.958383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.958450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.958715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.958781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.958987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.959053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.959295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.959375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.959596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.959663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.959885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.959951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.960184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.960250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-11-20 06:40:22.960515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.960583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.960838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.960904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.961125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.961193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.961466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.961534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.961772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.961838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.962061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.962127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.962350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.962419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.962642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.962708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.962922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.962988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.963241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.963319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 
00:29:51.376 [2024-11-20 06:40:22.963565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.963631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.963885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.963951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.964230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.964296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.964591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.964657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.964861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.964930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.965188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.965255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.965512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.965610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.965904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.965972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.966258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.376 [2024-11-20 06:40:22.966353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.376 qpair failed and we were unable to recover it. 00:29:51.376 [2024-11-20 06:40:22.966622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.966689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-11-20 06:40:22.966967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.967031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.967230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.967296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.967604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.967675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.967965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.968032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.968283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.968363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.968638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.968704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.968944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.969011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.969290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.969370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.969617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.969683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.969937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.970003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-11-20 06:40:22.970218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.970287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.970584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.970662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.970950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.971016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.971245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.971330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.971557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.971623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.971869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.971934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.972159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.972226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.972501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.972568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.972789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.972855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.973069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.973138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-11-20 06:40:22.973338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.973406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.973657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.973723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.973952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.974018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.974267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.974347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.974558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.974623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.974878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.974944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.975198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.975265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.975514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.975580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.975821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.975890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.976093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.976160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 
00:29:51.377 [2024-11-20 06:40:22.976373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.976442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.976696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.976763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.977010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.977077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.977338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.977408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.977680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.977747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.977966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.978031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.377 [2024-11-20 06:40:22.978340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.377 [2024-11-20 06:40:22.978408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.377 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.978698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.978765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.979003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.979099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.979360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.979433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 
00:29:51.378 [2024-11-20 06:40:22.979729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.979802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.980060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.980124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.980361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.980428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.980637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.980706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.980925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.980992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.981218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.981284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.981529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.981596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.981861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.981925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.982140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.982204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.982495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.982567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 
00:29:51.378 [2024-11-20 06:40:22.982795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.982859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.983098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.983164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.983468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.983551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.983766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.983836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.984059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.984126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.984408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.984478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.984765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.984839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.985092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.985156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.985405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.985473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.985681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.985749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 
00:29:51.378 [2024-11-20 06:40:22.986035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.986108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.986332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.986399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.986668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.986733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.986951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.987018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.987318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.987402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.987668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.987746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.988002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.988069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.988339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.988407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.988694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.988762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.988988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.989053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 
00:29:51.378 [2024-11-20 06:40:22.989285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.989374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.989626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.989691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.989935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.990015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.990326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.378 [2024-11-20 06:40:22.990393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.378 qpair failed and we were unable to recover it. 00:29:51.378 [2024-11-20 06:40:22.990642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.990707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.990962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.991028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.991319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.991391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.991644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.991709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.991924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.991991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.992225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.992290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 
00:29:51.379 [2024-11-20 06:40:22.992588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.992656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.992918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.992983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.993226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.993291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.993591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.993658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.993931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.993998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.994269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.994359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.994604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.994670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.994894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.994958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.995210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.995278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.995589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.995656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 
00:29:51.379 [2024-11-20 06:40:22.995929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.995995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.996201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.996268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.996611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.996691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.996934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.996999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.997261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.997350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.997603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.997669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.997861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.997933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.998135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.998198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.998470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.998537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.998799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.998864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 
00:29:51.379 [2024-11-20 06:40:22.999121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.999203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.999478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.999544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:22.999809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:22.999874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.000153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.000219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.000484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.000553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.000808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.000874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.001130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.001196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.001434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.001502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.001802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.001870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.002157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.002223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 
00:29:51.379 [2024-11-20 06:40:23.002512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.002579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.002818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.002884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.379 [2024-11-20 06:40:23.003092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.379 [2024-11-20 06:40:23.003160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.379 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.003456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.003524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.003770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.003836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.004127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.004202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.004466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.004536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.004771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.004838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.005113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.005147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 00:29:51.380 [2024-11-20 06:40:23.005268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.380 [2024-11-20 06:40:23.005327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.380 qpair failed and we were unable to recover it. 
00:29:51.380 [2024-11-20 06:40:23.005503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.380 [2024-11-20 06:40:23.005538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.380 qpair failed and we were unable to recover it.
00:29:51.380 [... the identical three-line failure (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every connection retry between 06:40:23.005676 and 06:40:23.064370; only the timestamps differ ...]
00:29:51.385 [2024-11-20 06:40:23.064603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.385 [2024-11-20 06:40:23.064667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.385 qpair failed and we were unable to recover it.
00:29:51.385 [2024-11-20 06:40:23.064920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.385 [2024-11-20 06:40:23.064985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.385 qpair failed and we were unable to recover it. 00:29:51.385 [2024-11-20 06:40:23.065225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.385 [2024-11-20 06:40:23.065288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.385 qpair failed and we were unable to recover it. 00:29:51.385 [2024-11-20 06:40:23.065562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.385 [2024-11-20 06:40:23.065627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.385 qpair failed and we were unable to recover it. 00:29:51.385 [2024-11-20 06:40:23.065912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.385 [2024-11-20 06:40:23.065978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.385 qpair failed and we were unable to recover it. 00:29:51.385 [2024-11-20 06:40:23.066225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.385 [2024-11-20 06:40:23.066290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.066560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.066624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.066841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.066908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.067134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.067199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.067512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.067579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.067820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.067884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 
00:29:51.386 [2024-11-20 06:40:23.068177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.068242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.068455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.068521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.068767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.068832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.069111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.069176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.069425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.069490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.069702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.069767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.070024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.070089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.070342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.070407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.070649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.070713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.070964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.071029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 
00:29:51.386 [2024-11-20 06:40:23.071238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.071316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.071533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.071598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.071834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.071899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.072155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.072218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.072485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.072552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.072776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.072841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.073078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.073143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.073409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.073478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.073763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.073828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.074070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.074134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 
00:29:51.386 [2024-11-20 06:40:23.074378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.074445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.074734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.074798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.075030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.075094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.075352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.075419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.075644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.075707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.075942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.076006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.076219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.076284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.076574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.076640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.076862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.076930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.077145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.077210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 
00:29:51.386 [2024-11-20 06:40:23.077442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.077510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.077750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.077819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.386 qpair failed and we were unable to recover it. 00:29:51.386 [2024-11-20 06:40:23.078095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.386 [2024-11-20 06:40:23.078160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.078435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.078501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.078760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.078824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.079108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.079172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.079432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.079498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.079726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.079804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.080022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.080086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.080372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.080438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 
00:29:51.387 [2024-11-20 06:40:23.080661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.080726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.080972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.081036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.081266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.081348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.081589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.081654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.081933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.081998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.082278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.082365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.082572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.082639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.082930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.082995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.083243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.083326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.083581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.083645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 
00:29:51.387 [2024-11-20 06:40:23.083887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.083953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.084232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.084296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.084582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.084650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.084937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.085002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.085208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.085273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.085582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.085648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.085935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.085999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.086239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.086325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.086606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.086672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.086880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.086945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 
00:29:51.387 [2024-11-20 06:40:23.087235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.087301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.087569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.087635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.087880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.087942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.088179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.088241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.088556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.088632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.088835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.088902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.089194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.089260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.089531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.089596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.089876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.089940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 00:29:51.387 [2024-11-20 06:40:23.090227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.387 [2024-11-20 06:40:23.090292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.387 qpair failed and we were unable to recover it. 
00:29:51.387 [2024-11-20 06:40:23.090566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.090630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.090868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.090932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.091222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.091287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.091590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.091656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.091894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.091959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.092216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.092281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.092546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.092614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.092840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.092905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.093130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.093195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.093495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.093562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 
00:29:51.388 [2024-11-20 06:40:23.093779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.093843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.094096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.094160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.094401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.094470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.094709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.094774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.095058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.095122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.095374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.095440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.095729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.095795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.096080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.096146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.096345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.096410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.096603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.096669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 
00:29:51.388 [2024-11-20 06:40:23.096957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.097023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.097228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.097326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.097573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.097641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.097898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.097963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.098182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.098248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.098531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.098598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.098840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.098906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.099114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.099180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.099429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.099496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.099773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.099839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 
00:29:51.388 [2024-11-20 06:40:23.100130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.100194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.100390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.100456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.100691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.100756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.100969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.101035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.101248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.101327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.101592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.101657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.101943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.102008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.102256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.102338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.388 qpair failed and we were unable to recover it. 00:29:51.388 [2024-11-20 06:40:23.102587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.388 [2024-11-20 06:40:23.102651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.102869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.102934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 
00:29:51.389 [2024-11-20 06:40:23.103198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.103262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.103497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.103562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.103824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.103888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.104138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.104203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.104502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.104569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.104812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.104878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.105159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.105224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.105505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.105571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.105794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.105858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.106113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.106177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 
00:29:51.389 [2024-11-20 06:40:23.106412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.106478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.106721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.106786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.107045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.107110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.107332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.107400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.107636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.107699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.107901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.107966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.108219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.108283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.108510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.108574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.108777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.108841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.109039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.109101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 
00:29:51.389 [2024-11-20 06:40:23.109348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.109414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.109639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.109704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.109949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.110015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.110338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.110404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.110664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.110729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.111008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.111074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.111365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.111432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.111686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.111751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.112004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.112068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.112269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.112348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 
00:29:51.389 [2024-11-20 06:40:23.112564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.112628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.112878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.112944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.113222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.113286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.113579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.113644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.113887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.113951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.114199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.114264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.389 [2024-11-20 06:40:23.114555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.389 [2024-11-20 06:40:23.114621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.389 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.114841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.114906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.115119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.115184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.115368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.115435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 
00:29:51.390 [2024-11-20 06:40:23.115670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.115735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.116036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.116100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.116319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.116385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.116630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.116696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.116974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.117038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.117292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.117371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.117627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.117691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.117985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.118050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.118257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.118341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.118632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.118707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 
00:29:51.390 [2024-11-20 06:40:23.118934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.118999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.119215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.119280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.119557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.119621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.119826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.119893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.120173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.120237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.120507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.120573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.120815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.120878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.121119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.121183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.121430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.121496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.121746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.121809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 
00:29:51.390 [2024-11-20 06:40:23.121997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.122061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.122321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.122391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.122634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.122697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.123003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.123067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.123282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.123365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.123585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.123648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.123927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.123991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.124226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.124291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.124528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.124592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.124825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.124889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 
00:29:51.390 [2024-11-20 06:40:23.125132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.390 [2024-11-20 06:40:23.125197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.390 qpair failed and we were unable to recover it. 00:29:51.390 [2024-11-20 06:40:23.125436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.125501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.125782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.125847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.126137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.126202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.126446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.126511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.126697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.126761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.127019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.127094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.127340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.127406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.127656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.127720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.127966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.128030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 
00:29:51.391 [2024-11-20 06:40:23.128245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.128320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.128521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.128585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.128831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.128894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.129147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.129211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.129527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.129592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.129872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.129937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.130184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.130248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.130479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.130544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.130758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.130822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.131105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.131169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 
00:29:51.391 [2024-11-20 06:40:23.131406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.131471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.131716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.131783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.132000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.132065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.132318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.132385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.132573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.132638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.132878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.132942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.133207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.133272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.133534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.133600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.133836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.133899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.134168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.134234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 
00:29:51.391 [2024-11-20 06:40:23.134477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.134544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.134820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.134883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.135106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.135170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.135433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.135498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.135718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.135783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.135981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.136045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.136335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.136401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.136590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.136654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.391 [2024-11-20 06:40:23.136941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.391 [2024-11-20 06:40:23.137005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.391 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.137251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.137331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 
00:29:51.392 [2024-11-20 06:40:23.137593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.137658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.137937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.138002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.138246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.138327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.138590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.138655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.138908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.138972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.139190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.139254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.139588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.139653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.139898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.139963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.140242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.140327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.140618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.140684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 
00:29:51.392 [2024-11-20 06:40:23.140925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.140990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.141228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.141293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.141556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.141620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.141825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.141891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.142169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.142234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.142490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.142555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.142839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.142903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.143200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.143265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.143544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.143609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.143891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.143956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 
00:29:51.392 [2024-11-20 06:40:23.144200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.144264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.144538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.144603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.144849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.144913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.145159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.145222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.145511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.145577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.145823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.145888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.146170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.146234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.146506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.146571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.146835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.146899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.147146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.147209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 
00:29:51.392 [2024-11-20 06:40:23.147451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.147516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.147760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.147827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.148118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.148183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.148437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.148502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.148754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.148832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.149068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.149134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.392 [2024-11-20 06:40:23.149429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.392 [2024-11-20 06:40:23.149494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.392 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.149695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.149760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.149964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.150028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.150282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.150360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 
00:29:51.393 [2024-11-20 06:40:23.150583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.150648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.150927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.150992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.151239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.151331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.151580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.151645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.151907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.151971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.152210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.152275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.152547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.152613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.152817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.152881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.153100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.153164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.153420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.153486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 
00:29:51.393 [2024-11-20 06:40:23.153712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.153778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.153987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.154051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.154239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.154318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.154604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.154667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.154962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.155026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.155273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.155374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.155653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.155718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.155918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.155983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.156230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.156294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.156570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.156634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 
00:29:51.393 [2024-11-20 06:40:23.156872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.156936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.157176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.157249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.157516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.157585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.157789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.157854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.158129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.158195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.158433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.158499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.158785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.158849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.159069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.159133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.159421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.159487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.159741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.159806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 
00:29:51.393 [2024-11-20 06:40:23.159999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.160065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.160360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.160436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.160677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.160743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.160989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.161053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.393 qpair failed and we were unable to recover it. 00:29:51.393 [2024-11-20 06:40:23.161335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.393 [2024-11-20 06:40:23.161402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.161660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.161726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.161947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.162015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.162270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.162347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.162598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.162664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.162912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.162976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 
00:29:51.394 [2024-11-20 06:40:23.163233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.163297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.163543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.163608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.163821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.163884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.164179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.164243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.164475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.164543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.164827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.164892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.165100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.165165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.165379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.165446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.165699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.165774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.166010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.166075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 
00:29:51.394 [2024-11-20 06:40:23.166324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.166389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.166675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.166740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.166991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.167055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.167252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.167330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.167551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.167616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.167907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.167971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.168178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.168245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.168512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.168578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.168844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.168908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.169151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.169220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 
00:29:51.394 [2024-11-20 06:40:23.169495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.169561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.169765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.169830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.170120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.170186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.170399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.170464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.170705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.170770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.171014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.171080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.171335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.171401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.171639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.394 [2024-11-20 06:40:23.171703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.394 qpair failed and we were unable to recover it. 00:29:51.394 [2024-11-20 06:40:23.171930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.171995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.172245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.172322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 
00:29:51.395 [2024-11-20 06:40:23.172568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.172632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.172879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.172944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.173191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.173255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.173521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.173587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.173833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.173899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.174125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.174190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.174480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.174546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.174793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.174857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.175139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.175204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.175449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.175515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 
00:29:51.395 [2024-11-20 06:40:23.175801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.175865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.176112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.176176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.176432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.176498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.176752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.176816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.177067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.177132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.177411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.177476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.177713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.177777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.177994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.178059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.178338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.178403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.178704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.178768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 
00:29:51.395 [2024-11-20 06:40:23.179012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.179080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.179331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.179399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.179607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.179673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.179911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.179976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.180206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.180270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.180551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.180618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.180873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.180946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.181167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.181232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.181461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.181527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.181722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.181786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 
00:29:51.395 [2024-11-20 06:40:23.181990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.182055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.182295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.182379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.182663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.182727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.182978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.183043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.183285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.183372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.183632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.395 [2024-11-20 06:40:23.183696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.395 qpair failed and we were unable to recover it. 00:29:51.395 [2024-11-20 06:40:23.183982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.184047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.184269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.184372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.184583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.184647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.184872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.184936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 
00:29:51.396 [2024-11-20 06:40:23.185141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.185206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.185514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.185583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.185800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.185865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.186091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.186156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.186438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.186505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.186797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.186860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.187070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.187146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.187379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.187444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.187690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.187754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.187998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.188063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 
00:29:51.396 [2024-11-20 06:40:23.188366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.188432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.188682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.188747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.188993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.189057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.189297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.189377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.189590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.189656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.189909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.396 [2024-11-20 06:40:23.189973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.396 qpair failed and we were unable to recover it. 00:29:51.396 [2024-11-20 06:40:23.190253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.190337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.190588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.190653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.190915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.190979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.191220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.191284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 
00:29:51.673 [2024-11-20 06:40:23.191618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.191684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.191963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.192027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.192361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.192430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.192662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.192723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.192926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.192991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.193240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.193325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.193545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.193611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.193881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.193946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.194162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.194228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 00:29:51.673 [2024-11-20 06:40:23.194489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.673 [2024-11-20 06:40:23.194555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.673 qpair failed and we were unable to recover it. 
00:29:51.673 [2024-11-20 06:40:23.194811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.194875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.195077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.195141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.195333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.195399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.195649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.195724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.195938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.196002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.196209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.196273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.196557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.196621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.196836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.196900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.197184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.197249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.197529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.197594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 
00:29:51.674 [2024-11-20 06:40:23.197837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.197902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.198157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.198222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.198439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.198505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.198784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.198848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.199074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.199138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.199386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.199452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.199688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.199752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.199980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.200044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.200294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.200373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.200615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.200678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 
00:29:51.674 [2024-11-20 06:40:23.200924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.200989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.201200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.201264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.201550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.201615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.201833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.201897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.202172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.202236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.202539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.202606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.202859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.202926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.674 [2024-11-20 06:40:23.203208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.674 [2024-11-20 06:40:23.203272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.674 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.203500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.203565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.203833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.203898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 
00:29:51.675 [2024-11-20 06:40:23.204094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.204158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.204412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.204478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.204731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.204795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.205036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.205100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.205329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.205401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.205691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.205758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.206036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.206108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.206382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.206448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.206699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.206764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.207007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.207074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 
00:29:51.675 [2024-11-20 06:40:23.207295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.207374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.207585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.207651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.207891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.207955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.208205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.208269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.208602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.208667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.208908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.208973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.209202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.209266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.209557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.209621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.209857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.209924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.210189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.210253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 
00:29:51.675 [2024-11-20 06:40:23.210544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.210644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.210917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.210990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.211258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.211351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.211652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.211718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.211933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.212011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.212254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.212347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.212576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.212644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.212904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.212969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.213211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.213275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 00:29:51.675 [2024-11-20 06:40:23.213581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.675 [2024-11-20 06:40:23.213645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.675 qpair failed and we were unable to recover it. 
00:29:51.675 [2024-11-20 06:40:23.213889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.675 [2024-11-20 06:40:23.213953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.675 qpair failed and we were unable to recover it.
[The same error pair (posix.c:1054:posix_sock_create: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420) repeats for every reconnect attempt from 06:40:23.213889 through 06:40:23.279959, and each attempt ends with "qpair failed and we were unable to recover it."]
00:29:51.682 [2024-11-20 06:40:23.280201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.280266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.280505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.280570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.280869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.281079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.281144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.281404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.281470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.281696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.281763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.282046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.282111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.282391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.282457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.282702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.282766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.282970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.283033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 
00:29:51.682 [2024-11-20 06:40:23.283270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.283349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.283637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.283700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.283974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.284037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.284290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.284369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.284643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.284707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.284997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.285060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.285332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.285398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.285690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.285754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.286010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.286074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.286318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.286385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 
00:29:51.682 [2024-11-20 06:40:23.286663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.286726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.286957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.287020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.287260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.682 [2024-11-20 06:40:23.287344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.682 qpair failed and we were unable to recover it. 00:29:51.682 [2024-11-20 06:40:23.287640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.287704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.287945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.288009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.288225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.288289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.288551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.288615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.288847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.288911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.289145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.289210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.289452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.289518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 
00:29:51.683 [2024-11-20 06:40:23.289804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.289869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.290080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.290144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.290413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.290479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.290771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.290836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.291051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.291115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.291348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.291412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.291639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.291703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.291951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.292015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.292229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.292292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.292554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.292618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 
00:29:51.683 [2024-11-20 06:40:23.292869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.292933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.293216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.293280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.293503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.293567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.293822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.293887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.294128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.294191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.294449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.294526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.294790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.294855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.295108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.295172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.295428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.295494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.295701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.295766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 
00:29:51.683 [2024-11-20 06:40:23.296026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.296090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.296349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.296415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.296660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.296725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.296970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.297034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.297281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.297364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.297574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.297639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.297915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.297978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.298202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.298267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.683 qpair failed and we were unable to recover it. 00:29:51.683 [2024-11-20 06:40:23.298510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.683 [2024-11-20 06:40:23.298575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.298833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.298898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 
00:29:51.684 [2024-11-20 06:40:23.299091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.299155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.299367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.299434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.299649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.299713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.299996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.300060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.300263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.300342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.300584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.300650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.300905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.300970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.301218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.301283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.301611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.301675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.301919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.301983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 
00:29:51.684 [2024-11-20 06:40:23.302256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.302334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.302584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.302647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.302853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.302929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.303185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.303250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.303508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.303574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.303816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.303882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.304100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.304165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.304410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.304475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.304718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.304785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.305022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.305086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 
00:29:51.684 [2024-11-20 06:40:23.305363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.305432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.305668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.305734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.305966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.306031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.306228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.306292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.306565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.306630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.306858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.306926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.307160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.307227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.307466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.307532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.307790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.307854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.684 [2024-11-20 06:40:23.308075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.308140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 
00:29:51.684 [2024-11-20 06:40:23.308376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.684 [2024-11-20 06:40:23.308445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.684 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.308733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.308796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.309035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.309103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.309402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.309468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.309712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.309777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.310069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.310134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.310358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.310425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.310710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.310775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.310987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.311052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.311295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.311405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 
00:29:51.685 [2024-11-20 06:40:23.311667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.311732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.311928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.311993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.312249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.312333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.312588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.312653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.312868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.312932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.313188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.313251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.313485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.313550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.313787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.313852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.314095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.314160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.314394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.314461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 
00:29:51.685 [2024-11-20 06:40:23.314742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.314806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.315021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.315085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.315370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.315437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.315649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.315713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.315944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.316008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.316259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.316338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.316575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.316639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.316916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.316983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.317219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.317283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.317507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.317571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 
00:29:51.685 [2024-11-20 06:40:23.317815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.317882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.318158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.318224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.318483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.318550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.318764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.318832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.319040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.319104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.319383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.319449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.319712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.319778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.320030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.320093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.685 [2024-11-20 06:40:23.320345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.685 [2024-11-20 06:40:23.320410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.685 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.320606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.320671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 
00:29:51.686 [2024-11-20 06:40:23.320951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.321015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.321259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.321338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.321550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.321613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.321866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.321930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.322188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.322253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.322523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.322588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.322825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.322890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.323140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.323204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.323420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.323487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 00:29:51.686 [2024-11-20 06:40:23.323734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.686 [2024-11-20 06:40:23.323799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.686 qpair failed and we were unable to recover it. 
00:29:51.691 [2024-11-20 06:40:23.386774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.691 [2024-11-20 06:40:23.386838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.691 qpair failed and we were unable to recover it.
00:29:51.691 [2024-11-20 06:40:23.387123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.387186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.387386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.387453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.387673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.387738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.387945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.388010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.388296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.388375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.388612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.388676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.388918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.388983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.389240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.389318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.389578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.389642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 00:29:51.691 [2024-11-20 06:40:23.389905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.691 [2024-11-20 06:40:23.389970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.691 qpair failed and we were unable to recover it. 
00:29:51.692 [2024-11-20 06:40:23.390219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.390283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.390557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.390622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.390907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.390971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.391214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.391279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.391525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.391590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.391805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.391871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.392063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.392127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.392382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.392459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.392718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.392783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.393024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.393087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 
00:29:51.692 [2024-11-20 06:40:23.393333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.393399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.393650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.393718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.393928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.393991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.394250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.394349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.394605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.394670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.394957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.395022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.395320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.395387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.395603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.395672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.395928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.395993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.396204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.396271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 
00:29:51.692 [2024-11-20 06:40:23.396542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.396607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.396876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.396940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.397175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.397241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.397522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.397588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.397805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.397870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.398106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.398170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.398427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.398493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.398719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.398783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.398998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.399062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.399277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.399357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 
00:29:51.692 [2024-11-20 06:40:23.399573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.399638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.399857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.399923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.400125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.400190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.400432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.400498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.400749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.400825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.401083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.401146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.401367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.401433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.692 [2024-11-20 06:40:23.401657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.692 [2024-11-20 06:40:23.401722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.692 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.402015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.402079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.402363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.402429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 
00:29:51.693 [2024-11-20 06:40:23.402636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.402700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.402953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.403017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.403235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.403299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.403616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.403680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.403883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.403949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.404176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.404242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.404513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.404580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.404848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.404911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.405211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.405276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.405573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.405639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 
00:29:51.693 [2024-11-20 06:40:23.405880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.405944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.406226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.406290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.406575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.406641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.406863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.406927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.407183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.407247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.407519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.407586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.407772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.407834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.408080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.408146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.408427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.408493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.408754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.408819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 
00:29:51.693 [2024-11-20 06:40:23.409076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.409141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.409359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.409435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.409698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.409763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.410046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.410111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.410387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.410453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.410737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.410801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.411077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.411143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.411373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.411437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.693 qpair failed and we were unable to recover it. 00:29:51.693 [2024-11-20 06:40:23.411651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.693 [2024-11-20 06:40:23.411715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.411993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.412058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 
00:29:51.694 [2024-11-20 06:40:23.412274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.412368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.412575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.412640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.412881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.412946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.413247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.413351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.413638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.413703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.413957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.414021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.414234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.414323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.414554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.414618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.414819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.414882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.415085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.415149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 
00:29:51.694 [2024-11-20 06:40:23.415517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.415584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.415871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.415935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.416175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.416240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.416480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.416547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.416791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.416856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.417069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.417133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.417350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.417416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.417622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.417688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.417922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.417986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.418221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.418285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 
00:29:51.694 [2024-11-20 06:40:23.418591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.418656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.418906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.418970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.419245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.419327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.419563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.419627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.419875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.419941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.420223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.420286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.420519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.420584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.420833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.420897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.421086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.421149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 00:29:51.694 [2024-11-20 06:40:23.421382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.694 [2024-11-20 06:40:23.421448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.694 qpair failed and we were unable to recover it. 
00:29:51.694 [2024-11-20 06:40:23.421649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.694 [2024-11-20 06:40:23.421713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.694 qpair failed and we were unable to recover it.
00:29:51.694 [2024-11-20 06:40:23.421911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.694 [2024-11-20 06:40:23.421974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.694 qpair failed and we were unable to recover it.
00:29:51.694 [2024-11-20 06:40:23.422331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.694 [2024-11-20 06:40:23.422431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:51.694 qpair failed and we were unable to recover it.
[... the same three-line failure pattern repeats for tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 from 06:40:23.422749 through 06:40:23.440869 ...]
00:29:51.696 [2024-11-20 06:40:23.441095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.441176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.441468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.441536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.441787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.441853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.442113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.442179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.442408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.442484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.442774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.442841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.443081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.443146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.443366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.443432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.443714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.443788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.444072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.444139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 
00:29:51.696 [2024-11-20 06:40:23.444382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.444449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.444695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.444759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.444953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.445028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.445262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.445346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.445647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.445712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.445929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.696 [2024-11-20 06:40:23.445997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.696 qpair failed and we were unable to recover it. 00:29:51.696 [2024-11-20 06:40:23.446235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.446346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.446598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.446664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.446912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.446977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.447225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.447290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 
00:29:51.697 [2024-11-20 06:40:23.447560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.447628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.447920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.447984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.448194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.448273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.448590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.448670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.448986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.449052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.449259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.449347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.449601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.449665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.449892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.449958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.450234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.450300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.450579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.450645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 
00:29:51.697 [2024-11-20 06:40:23.450898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.450962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.451258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.451357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.451647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.451713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.451927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.451992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.452207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.452272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.452510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.452586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.452886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.452951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.453244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.453330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.453583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.453662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.453918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.453985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 
00:29:51.697 [2024-11-20 06:40:23.454270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.454375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.454623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.454689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.454937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.455007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.455338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.455407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.455664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.455729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.455942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.456008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.456276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.456369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.456665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.456732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.457012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.457078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.457350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.457419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 
00:29:51.697 [2024-11-20 06:40:23.457678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.457747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.458035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.458100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.458354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.458422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.697 qpair failed and we were unable to recover it. 00:29:51.697 [2024-11-20 06:40:23.458612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.697 [2024-11-20 06:40:23.458693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.458935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.459003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.459241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.459328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.459584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.459650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.459927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.460003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.460255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.460342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.460636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.460701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 
00:29:51.698 [2024-11-20 06:40:23.460953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.461019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.461353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.461423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.461679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.461756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.462046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.462111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.462334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.462401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.462709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.462777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.463017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.463082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.463334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.463401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.463644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.463726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.463988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.464054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 
00:29:51.698 [2024-11-20 06:40:23.464274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.464359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.464568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.464634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.464883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.464951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.465200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.465265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.465563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.465628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.465922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.466003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.466292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.466401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.466663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.466728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.466986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.467051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.467255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.467339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 
00:29:51.698 [2024-11-20 06:40:23.467661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.467728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.467966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.468030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.468247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.468329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.468588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.468670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.468895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.468960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.469208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.698 [2024-11-20 06:40:23.469273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.698 qpair failed and we were unable to recover it. 00:29:51.698 [2024-11-20 06:40:23.469508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.469574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.469821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.469888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.470135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.470201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.470519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.470586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 
00:29:51.699 [2024-11-20 06:40:23.470874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.470950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.471229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.471295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.471576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.471643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.471883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.471948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.472209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.472278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.472591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.472659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.472914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.472979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.473167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.473232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.473462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.473546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.473839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.473904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 
00:29:51.699 [2024-11-20 06:40:23.474148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.474213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.474517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.474583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.474858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.474936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.475145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.475214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.475528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.475596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.475884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.475952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.476191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.476256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.476573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.476639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.476888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.476953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.477145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.477212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 
00:29:51.699 [2024-11-20 06:40:23.477523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.477591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.477844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.477909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.478110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.478178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.478433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.478502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.478719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.478785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.479039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.479105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.479371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.479439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.479668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.479736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.480018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.480082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.480367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.480434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 
00:29:51.699 [2024-11-20 06:40:23.480663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.480731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.481003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.481069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.481361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.481427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.699 [2024-11-20 06:40:23.481667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.699 [2024-11-20 06:40:23.481732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.699 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.481991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.482071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.482296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.482398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.482659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.482724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.482971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.483036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.483287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.483386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.483622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.483687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 
00:29:51.700 [2024-11-20 06:40:23.483938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.484005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.484244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.484327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.484622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.484690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.484925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.484990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.485242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.485326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.485573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.485651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.485918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.485984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.486240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.486322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.486613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.486678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.486890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.486970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 
00:29:51.700 [2024-11-20 06:40:23.487273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.487359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.487595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.487661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.487884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.487959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.488239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.488332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.488557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.488622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.488830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.488895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.489080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.489144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.489387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.489457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.489737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.489802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.490030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.490095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 
00:29:51.700 [2024-11-20 06:40:23.490366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.490433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.700 [2024-11-20 06:40:23.490690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.700 [2024-11-20 06:40:23.490774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.700 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.491015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.491084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.491338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.491405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.491646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.491712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.491958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.492030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.492323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.492411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.492632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.492699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.492909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.492976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.493227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.493326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 
00:29:51.981 [2024-11-20 06:40:23.493550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.493616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.493842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.493907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.494121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.494186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.494422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.494498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.494752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.494817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.495048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.495113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.495412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.495479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.495741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.495809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.496090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.496156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.496414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.496481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 
00:29:51.981 [2024-11-20 06:40:23.496726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.496791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.497074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.497142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.497360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.497426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.497674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.497741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.498038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.981 [2024-11-20 06:40:23.498105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.981 qpair failed and we were unable to recover it. 00:29:51.981 [2024-11-20 06:40:23.498420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.498489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.498726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.498791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.499031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.499096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.499380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.499465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.499762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.499827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 
00:29:51.982 [2024-11-20 06:40:23.500053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.500121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.500354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.500420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.500646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.500732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.501014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.501080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.501333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.501400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.501639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.501706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.501976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.502044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.502264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.502366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.502627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.502692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.502984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.503049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 
00:29:51.982 [2024-11-20 06:40:23.503325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.503394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.503638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.503704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.503984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.504050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.504254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.504339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.504620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.504687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.504974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.505039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.505263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.505349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.505618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.505699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.505973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.506039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.506333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.506399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 
00:29:51.982 [2024-11-20 06:40:23.506677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.506743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.506971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.507039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.507329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.507397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.507645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.507710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.507918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.507983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.508235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.508320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.508589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.508655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.508896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.508960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.509166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.509231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.509508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.509577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 
00:29:51.982 [2024-11-20 06:40:23.509825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.509891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.510136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.510203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.982 [2024-11-20 06:40:23.510458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.982 [2024-11-20 06:40:23.510525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.982 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.510837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.510906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.511186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.511252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.511523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.511588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.511795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.511861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.512083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.512153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.512435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.512502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.512756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.512821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 
00:29:51.983 [2024-11-20 06:40:23.513081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.513149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.513436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.513503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.513757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.513836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.514040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.514106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.514333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.514413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.514669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.514735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.514953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.515019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.515273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.515358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.515582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.515649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.515876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.515942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 
00:29:51.983 [2024-11-20 06:40:23.516154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.516220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.516482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.516551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.516808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.516873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.517129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.517197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.517473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.517539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.517791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.517855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.518081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.518154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.518443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.518512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.518763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.518828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.519071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.519137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 
00:29:51.983 [2024-11-20 06:40:23.519358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.519428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.519709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.519774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.519990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.520055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.520297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.520381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.520606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.520672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.520904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.520970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.521217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.521282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.521545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.521609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.521867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.521932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 00:29:51.983 [2024-11-20 06:40:23.522176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.983 [2024-11-20 06:40:23.522244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.983 qpair failed and we were unable to recover it. 
00:29:51.983 [2024-11-20 06:40:23.522502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.522568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.522770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.522839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.523126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.523207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.523514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.523583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.523834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.523901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.524158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.524224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.524464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.524548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.524804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.524871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.525086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.525152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.525392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.525459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 
00:29:51.984 [2024-11-20 06:40:23.525709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.525780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.526020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.526087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.526372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.526452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.526739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.526803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.527054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.527126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.527395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.527464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.527700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.527766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.528055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.528120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.528339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.528423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.528729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.528795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 
00:29:51.984 [2024-11-20 06:40:23.529043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.529108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.529333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.529400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.529652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.529720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.529963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.530028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.530318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.530386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.530619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.530685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.530944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.531013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.531223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.531288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.531611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.531677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.531870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.531935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 
00:29:51.984 [2024-11-20 06:40:23.532166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.532234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.532529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.532596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.532893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.532957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.533209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.533290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.533553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.533620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.533875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.533939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.534179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.534246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.534513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.984 [2024-11-20 06:40:23.534585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.984 qpair failed and we were unable to recover it. 00:29:51.984 [2024-11-20 06:40:23.534838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.534902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.535166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.535232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 
00:29:51.985 [2024-11-20 06:40:23.535503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.535569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.535801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.535877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.536139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.536204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.536479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.536547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.536747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.536811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.537043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.537113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.537364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.537432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.537640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.537705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.537958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.538033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.538290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.538377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 
00:29:51.985 [2024-11-20 06:40:23.538563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.538628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.538850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.538916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.539104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.539180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.539430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.539499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.539794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.539860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.540059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.540126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.540338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.540405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.540652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.540733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.541038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.541102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 00:29:51.985 [2024-11-20 06:40:23.541334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.985 [2024-11-20 06:40:23.541401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.985 qpair failed and we were unable to recover it. 
00:29:51.985 [2024-11-20 06:40:23.541642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.985 [2024-11-20 06:40:23.541708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:51.985 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for every reconnection attempt from 06:40:23.541968 through 06:40:23.605899 ...]
00:29:51.991 [2024-11-20 06:40:23.606120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.991 [2024-11-20 06:40:23.606185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:51.991 qpair failed and we were unable to recover it.
00:29:51.991 [2024-11-20 06:40:23.606425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.606491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.606698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.606762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.606999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.607067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.607343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.607411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.607670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.607735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.608016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.608082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.608344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.608411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.608659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.608726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.609014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.609078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.609337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.609406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 
00:29:51.991 [2024-11-20 06:40:23.609658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.609738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.610041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.610107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.610342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.610409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.610643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.610709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.610985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.611052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.611268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.611355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.611607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.611673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.611923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.611990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.612250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.991 [2024-11-20 06:40:23.612338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.991 qpair failed and we were unable to recover it. 00:29:51.991 [2024-11-20 06:40:23.612562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.612639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 
00:29:51.992 [2024-11-20 06:40:23.612922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.612987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.613235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.613300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.613625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.613694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.613990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.614053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.614297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.614390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.614672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.614755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.615019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.615084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.615291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.615387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.615640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.615705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.615957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.616024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 
00:29:51.992 [2024-11-20 06:40:23.616242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.616330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.616533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.616596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.616837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.616902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.617172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.617239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.617505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.617572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.617817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.617882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.618106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.618170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.618385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.618463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.618695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.618761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.618976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.619040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 
00:29:51.992 [2024-11-20 06:40:23.619334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.619400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.619673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.619742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.619971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.620036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.620328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.620397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.620636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.620701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.620992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.621060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.621359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.621426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.621682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.621747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.622035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.622107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.622391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.622458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 
00:29:51.992 [2024-11-20 06:40:23.622713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.622778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.623061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.623126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.623385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.623453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.623739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.623804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.624002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.624069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.624335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.624401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.624663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.992 [2024-11-20 06:40:23.624733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.992 qpair failed and we were unable to recover it. 00:29:51.992 [2024-11-20 06:40:23.625015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.625080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.625316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.625386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.625666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.625756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 
00:29:51.993 [2024-11-20 06:40:23.626055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.626121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.626366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.626433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.626715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.626779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.627039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.627109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.627329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.627397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.627659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.627724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.627968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.628033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.628281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.628370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.628656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.628720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.628933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.628998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 
00:29:51.993 [2024-11-20 06:40:23.629276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.629370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.629660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.629727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.629965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.630030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.630335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.630403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.630652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.630720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.631042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.631108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.631297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.631395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.631663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.631728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.631966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.632042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.632329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.632397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 
00:29:51.993 [2024-11-20 06:40:23.632661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.632726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.633003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.633067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.633374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.633443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.633649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.633714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.634007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.634071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.634268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.634368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.634715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.634782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.635028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.635092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.635364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.635431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.635663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.635730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 
00:29:51.993 [2024-11-20 06:40:23.635996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.636065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.636353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.636420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.636625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.636692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.636981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.637047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.993 [2024-11-20 06:40:23.637274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.993 [2024-11-20 06:40:23.637360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.993 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.637579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.637645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.637893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.637958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.638207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.638279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.638590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.638655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.638860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.638935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 
00:29:51.994 [2024-11-20 06:40:23.639158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.639223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.639485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.639552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.639788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.639855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.640062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.640131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.640378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.640444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.640660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.640727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.640946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.641015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.641239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.641323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.641538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.641603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.641802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.641869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 
00:29:51.994 [2024-11-20 06:40:23.642101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.642169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.642490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.642559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.642853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.642920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.643222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.643287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.643587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.643656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.643914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.643980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.644264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.644349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.644591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.644664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.644899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.644965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.645212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.645277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 
00:29:51.994 [2024-11-20 06:40:23.645553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.645618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.645869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.645949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.646270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.646360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.646579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.646647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.646950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.647028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.647348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.647417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.647679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.647744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.648003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.648067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.648338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.648405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.648656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.648721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 
00:29:51.994 [2024-11-20 06:40:23.648975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.994 [2024-11-20 06:40:23.649041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.994 qpair failed and we were unable to recover it. 00:29:51.994 [2024-11-20 06:40:23.649287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.649372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.649615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.649681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.649966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.650031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.650328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.650394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.650630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.650694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.650933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.650998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.651255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.651362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.651598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.651664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.651960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.652036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 
00:29:51.995 [2024-11-20 06:40:23.652344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.652410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.652647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.652714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.652997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.653062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.653326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.653393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.653649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.653714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.653956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.654021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.654322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.654389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.654672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.654737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.655033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.655097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.655383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.655451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 
00:29:51.995 [2024-11-20 06:40:23.655745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.655810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.656042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.656107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.656348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.656415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.656628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.656696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.656950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.657016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.657272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.657357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.657560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.657626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.657831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.657896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.658150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.658214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.658517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.658584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 
00:29:51.995 [2024-11-20 06:40:23.658829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.658893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.659134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.659199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.659508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.659576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.659775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.995 [2024-11-20 06:40:23.659841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.995 qpair failed and we were unable to recover it. 00:29:51.995 [2024-11-20 06:40:23.660100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.660165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.660460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.660527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.660740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.660806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.661043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.661108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.661331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.661397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.661638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.661703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 
00:29:51.996 [2024-11-20 06:40:23.661983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.662049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.662256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.662334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.662530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.662594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.662844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.662908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.663146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.663210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.663509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.663576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.663838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.663903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.664142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.664206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.664479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.664546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.664826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.664902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 
00:29:51.996 [2024-11-20 06:40:23.665138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.665202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.665475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.665541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.665745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.665810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.666052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.666117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.666410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.666475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.666666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.666732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.667011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.667076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.667358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.667424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.667670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.667735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.668016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.668080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 
00:29:51.996 [2024-11-20 06:40:23.668335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.668402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.668673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.668737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.668984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.669048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.669338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.669404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.669661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.669727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.670004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.670069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.670355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.670423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.670720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.670784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.671070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.671135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.671382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.671450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 
00:29:51.996 [2024-11-20 06:40:23.671734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.996 [2024-11-20 06:40:23.671798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.996 qpair failed and we were unable to recover it. 00:29:51.996 [2024-11-20 06:40:23.672036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.672101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.672362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.672430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.672718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.672782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.673041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.673106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.673380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.673447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.673740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.673805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.674010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.674078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.674295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.674374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.674669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.674734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 
00:29:51.997 [2024-11-20 06:40:23.675036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.675101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.675351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.675417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.675686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.675751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.675958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.676026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.676324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.676390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.676641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.676707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.676951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.677018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.677264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.677344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.677559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.677624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.677902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.677977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 
00:29:51.997 [2024-11-20 06:40:23.678256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.678334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.678571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.678636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.678926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.678991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.679250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.679326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.679575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.679639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.679921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.679985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.680233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.680300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.680570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.680634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.680858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.680923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.681202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.681267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 
00:29:51.997 [2024-11-20 06:40:23.681526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.681591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.681855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.681920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.682123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.682190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.682465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.682530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.682825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.682890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.683132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.683197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.683428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.683494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.683765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.997 [2024-11-20 06:40:23.683829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.997 qpair failed and we were unable to recover it. 00:29:51.997 [2024-11-20 06:40:23.684031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.684095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.684392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.684458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 
00:29:51.998 [2024-11-20 06:40:23.684696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.684761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.685056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.685121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.685336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.685404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.685680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.685745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.686006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.686071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.686266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.686344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.686603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.686668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.686921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.686985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.687279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.687358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.687617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.687681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 
00:29:51.998 [2024-11-20 06:40:23.687917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.687981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.688263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.688359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.688607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.688674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.688931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.688995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.689283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.689365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.689618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.689685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.689925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.689990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.690231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.690296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.690591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.690655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.690859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.690935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 
00:29:51.998 [2024-11-20 06:40:23.691219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.691283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.691577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.691642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.691883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.691948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.692219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.692283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.692598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.692663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.692890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.692954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.693142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.693208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.693471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.693537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.693783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.693847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.694093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.694157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 
00:29:51.998 [2024-11-20 06:40:23.694363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.694430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.694628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.694692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.694973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.695038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.695336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.695404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.695685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.998 [2024-11-20 06:40:23.695748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.998 qpair failed and we were unable to recover it. 00:29:51.998 [2024-11-20 06:40:23.696000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.696065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.696339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.696405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.696656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.696720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.696957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.697021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.697316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.697383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 
00:29:51.999 [2024-11-20 06:40:23.697620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.697685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.697925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.697990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.698276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.698355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.698650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.698714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.698989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.699054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.699315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.699382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.699682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.699782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.700017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.700086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.700339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.700428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.700702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.700769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 
00:29:51.999 [2024-11-20 06:40:23.701050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.701114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.701392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.701459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.701705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.701770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.702061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.702125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.702400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.702466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.702754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.702818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.703081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.703148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.703441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.703508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.703761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.703830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 00:29:51.999 [2024-11-20 06:40:23.704079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:51.999 [2024-11-20 06:40:23.704143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:51.999 qpair failed and we were unable to recover it. 
00:29:51.999 [2024-11-20 06:40:23.704416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:51.999 [2024-11-20 06:40:23.704482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:51.999 qpair failed and we were unable to recover it.
00:29:51.999 [... the same three-message sequence (connect() failed, errno = 111; sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 06:40:23.704 through 06:40:23.716 ...]
00:29:52.001 [2024-11-20 06:40:23.716861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x206df30 (9): Bad file descriptor
00:29:52.001 [... from 06:40:23.717 through 06:40:23.746 the connection attempts keep failing with errno = 111, now alternating between tqpair=0x205ffa0, tqpair=0x7fe768000b90 and (once) tqpair=0x7fe760000b90, all with addr=10.0.0.2, port=4420; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:29:52.006 [2024-11-20 06:40:23.747022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.747085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.747321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.747381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.747485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.747517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.747749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.747810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.748031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.748057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.748168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.748193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.748313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.748340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.748431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.748458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.748626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.748652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.748789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.748815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 
00:29:52.006 [2024-11-20 06:40:23.749034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.749097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.749380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.749412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.749554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.749581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.749699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.749724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.749811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.749867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.750042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.750103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.750347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.750380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.750493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.750523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.750696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.750754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.750962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.751021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 
00:29:52.006 [2024-11-20 06:40:23.751229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.751286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.751528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.751589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.751782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.751842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.752031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.752094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.752361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.752422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.752603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.752653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.752773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.752803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.752939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.752969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.753112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.753171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.753379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.753441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 
00:29:52.006 [2024-11-20 06:40:23.753782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.753843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.006 qpair failed and we were unable to recover it. 00:29:52.006 [2024-11-20 06:40:23.754097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.006 [2024-11-20 06:40:23.754157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.754393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.754464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.754734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.754795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.755056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.755090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.755277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.755349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.755536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.755595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.755736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.755768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.755909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.755942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.756117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 
00:29:52.007 [2024-11-20 06:40:23.756222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.756338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.756473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.756635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.756773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.756889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.756915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.757129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.757163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.757354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.757416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.757637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.757696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.757951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.758012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 
00:29:52.007 [2024-11-20 06:40:23.758261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.758295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.758507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.758567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.758748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.758807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.758922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.758956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.759166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.759225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.759445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.759507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.759738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.759798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 
00:29:52.007 [2024-11-20 06:40:23.760407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.760891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.760918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.761051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.761105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.007 [2024-11-20 06:40:23.761322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.007 [2024-11-20 06:40:23.761348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.007 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.761460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.761486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.761592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.761627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.761765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.761826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 
00:29:52.008 [2024-11-20 06:40:23.762050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.762077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.762328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.762390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.762626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.762656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.762768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.762794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.762930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.762964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.763180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.763213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.763351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.763386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.763621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.763652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.763893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.763939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.764022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.764049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 
00:29:52.008 [2024-11-20 06:40:23.764143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.764168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.764268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.764294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.764393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.764418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.764625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.764684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.764925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.764951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.765032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.765176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.765367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.765505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.765719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 
00:29:52.008 [2024-11-20 06:40:23.765832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.765970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.765996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.766107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.766133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.766361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.766396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.766537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.766570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.766747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.766801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.766973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.767006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.767268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.767309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.767444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.767478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.767684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.767747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 
00:29:52.008 [2024-11-20 06:40:23.767979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.768035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.768287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.768318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.768439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.768464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.008 [2024-11-20 06:40:23.768684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.008 [2024-11-20 06:40:23.768709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.008 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.768829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.768855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.768973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.769028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.769197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.769253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.769485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.769511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.769646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.769671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.769761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.769786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 
00:29:52.009 [2024-11-20 06:40:23.769923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.769948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.770094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.770126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.770233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.770271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.770468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.770526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.770730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.770787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.771011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.771037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.771142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.771167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.771271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.771297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.771407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.771432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.771605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.771666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 
00:29:52.009 [2024-11-20 06:40:23.771900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.771960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.772181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.772216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.772354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.772389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.772578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.772635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.772827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.772885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.773109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.773164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.773428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.773462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.773563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.773596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.773743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.773775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.773987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.774021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 
00:29:52.009 [2024-11-20 06:40:23.774199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.774245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.774358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.774383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.774541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.774594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.774765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.774818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.775035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.775083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.775167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.775193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.775337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.775393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.775609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.775663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.009 [2024-11-20 06:40:23.775887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.009 [2024-11-20 06:40:23.775940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.009 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.776175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.776229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 
00:29:52.010 [2024-11-20 06:40:23.776467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.776494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.776604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.776630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.776821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.776876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.777086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.777142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.777354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.777414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.777648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.777704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.777925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.777952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.778061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.778088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.778314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.778371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.778574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.778630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 
00:29:52.010 [2024-11-20 06:40:23.778845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.778900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.779060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.779117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.779345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.779410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.779593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.779648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.779832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.779887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.780048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.780102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.780342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.780374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.780509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.780539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.780746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.780773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.780909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.780935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 
00:29:52.010 [2024-11-20 06:40:23.781038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.781063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.781151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.781176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.781266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.781323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.781538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.781593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.781777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.781832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.782047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.782103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.782363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.782420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.782619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.782645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.782753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.782780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.782917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.782970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 
00:29:52.010 [2024-11-20 06:40:23.783191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.783245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.783425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.010 [2024-11-20 06:40:23.783480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.010 qpair failed and we were unable to recover it. 00:29:52.010 [2024-11-20 06:40:23.783689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.783743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.783951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.784007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.784252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.784320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.784517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.784549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.784714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.784745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.784928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.784985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.785232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.785288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.785525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.785582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 
00:29:52.011 [2024-11-20 06:40:23.785752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.785810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.786023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.786078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.786326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.786384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.786639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.786695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.786913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.786968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.787174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.787229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.787441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.787468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.787561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.787586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.787745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.787802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.788011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.788066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 
00:29:52.011 [2024-11-20 06:40:23.788344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.788402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.788609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.788665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.788880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.788946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.789168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.789201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.789307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.789338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.789456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.789512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.789758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.789812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.789992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.790047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.790247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.790314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.790512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.790565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 
00:29:52.011 [2024-11-20 06:40:23.790755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.790808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.791053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.791107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.791295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.791361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.791555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.791611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.791832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.791888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.792112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.792167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.792371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.011 [2024-11-20 06:40:23.792427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.011 qpair failed and we were unable to recover it. 00:29:52.011 [2024-11-20 06:40:23.792646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.792701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 00:29:52.012 [2024-11-20 06:40:23.792908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.792965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 00:29:52.012 [2024-11-20 06:40:23.793171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.793226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 
00:29:52.012 [2024-11-20 06:40:23.793401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.793457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 00:29:52.012 [2024-11-20 06:40:23.793671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.793728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 00:29:52.012 [2024-11-20 06:40:23.793945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.794001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 00:29:52.012 [2024-11-20 06:40:23.794217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.012 [2024-11-20 06:40:23.794272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.012 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.794508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.794563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.794801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.794857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.795112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.795144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.795278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.795316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.795475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.795532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.795726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.795782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 
00:29:52.291 [2024-11-20 06:40:23.795987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.796042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.796227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.796283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.796536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.796592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.796808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.796863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.797043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.797098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.797283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.797360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.797549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.797604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.797783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.291 [2024-11-20 06:40:23.797839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.291 qpair failed and we were unable to recover it. 00:29:52.291 [2024-11-20 06:40:23.798054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.798110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.798288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.798360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 
00:29:52.292 [2024-11-20 06:40:23.798610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.798666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.798848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.798904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.799073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.799140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.799352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.799409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.799623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.799679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.799882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.799938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.800132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.800187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.800339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.800396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.800608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.800663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.800854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.800910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 
00:29:52.292 [2024-11-20 06:40:23.801120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.801175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.801345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.801398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.801649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.801680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.801783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.801813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.801932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.801987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.802204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.802260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.802449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.802505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.802718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.802775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.802987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.803043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.803300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.803368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 
00:29:52.292 [2024-11-20 06:40:23.803570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.803626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.803842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.803899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.804113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.804168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.804403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.804460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.804659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.804716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.804928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.804985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.805208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.805264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.805552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.805608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.805822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.805877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.806098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.806154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 
00:29:52.292 [2024-11-20 06:40:23.806399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.292 [2024-11-20 06:40:23.806457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.292 qpair failed and we were unable to recover it. 00:29:52.292 [2024-11-20 06:40:23.806713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.806769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.806937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.806995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.807214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.807271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.807533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.807563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.807666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.807698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.807843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.807896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.808134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.808166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.808278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.808315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.808470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.808526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 
00:29:52.293 [2024-11-20 06:40:23.808780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.808836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.808992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.809047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.809249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.809324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.809516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.809572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.809737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.809794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.810007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.810064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.810325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.810381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.810597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.810678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.810902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.810977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.811186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.811241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 
00:29:52.293 [2024-11-20 06:40:23.811499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.811555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.811748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.811803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.812008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.812082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.812254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.812320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.812502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.812559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.812764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.812819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.813090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.813145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.813355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.813412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.813591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.813646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.813854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.813909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 
00:29:52.293 [2024-11-20 06:40:23.814065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.814120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.814382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.814438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.814652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.814708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.814914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.814971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.815142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.815198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.815404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.815461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.815690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.815746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.815957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.293 [2024-11-20 06:40:23.816013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.293 qpair failed and we were unable to recover it. 00:29:52.293 [2024-11-20 06:40:23.816249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.816324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.816593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.816678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 
00:29:52.294 [2024-11-20 06:40:23.816922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.816981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.817178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.817235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.817476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.817534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.817733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.817793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.818054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.818113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.818337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.818394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.818613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.818669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.818900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.818959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.819284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.819375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 00:29:52.294 [2024-11-20 06:40:23.819564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.294 [2024-11-20 06:40:23.819620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.294 qpair failed and we were unable to recover it. 
00:29:52.294 [2024-11-20 06:40:23.819810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.294 [2024-11-20 06:40:23.819888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:52.294 qpair failed and we were unable to recover it.
[... the same pair of messages, connect() failed, errno = 111 from posix.c:1054:posix_sock_create and the sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 from nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock followed by "qpair failed and we were unable to recover it.", repeats for every reconnection attempt between 06:40:23.819888 and 06:40:23.879916; only the timestamps differ ...]
00:29:52.299 [2024-11-20 06:40:23.879916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.300 [2024-11-20 06:40:23.879980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:52.300 qpair failed and we were unable to recover it.
00:29:52.300 [2024-11-20 06:40:23.880226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.880292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.880570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.880635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.880923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.880988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.881235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.881299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.881547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.881611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.881891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.881954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.882178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.882246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.882549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.882614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.882912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.882976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.883230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.883294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 
00:29:52.300 [2024-11-20 06:40:23.883619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.883683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.883922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.883985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.884199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.884266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.884518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.884583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.884827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.884892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.885090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.885157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.885369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.885437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.885715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.885780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.886020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.886083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.886293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.886371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 
00:29:52.300 [2024-11-20 06:40:23.886627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.886691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.886939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.887003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.887264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.887341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.887552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.887618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.887895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.887959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.888167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.888230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.888506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.888572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.888791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.888856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.889103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.889167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.889463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.889529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 
00:29:52.300 [2024-11-20 06:40:23.889822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.889887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.890130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.890193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.890442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.890507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.890755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.890833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.891090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.891154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.891399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.891465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.300 [2024-11-20 06:40:23.891687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.300 [2024-11-20 06:40:23.891751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.300 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.891994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.892058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.892257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.892353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.892618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.892685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 
00:29:52.301 [2024-11-20 06:40:23.892910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.892975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.893194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.893258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.893490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.893557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.893812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.893877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.894125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.894189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.894404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.894468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.894694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.894758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.894970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.895036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.895325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.895389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.895635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.895699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 
00:29:52.301 [2024-11-20 06:40:23.895952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.896015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.896200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.896267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.896553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.896618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.896896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.896961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.897216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.897279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.897544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.897608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.897818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.897883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.898127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.898191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.898467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.898533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.898776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.898839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 
00:29:52.301 [2024-11-20 06:40:23.899024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.899106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.899328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.899393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.899652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.899716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.899967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.900031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.900280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.900377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.900580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.301 [2024-11-20 06:40:23.900644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.301 qpair failed and we were unable to recover it. 00:29:52.301 [2024-11-20 06:40:23.900894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.900958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.901198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.901262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.901567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.901631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.901854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.901918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 
00:29:52.302 [2024-11-20 06:40:23.902194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.902258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.902554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.902619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.902846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.902910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.903103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.903167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.903393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.903458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.903732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.903797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.904027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.904091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.904317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.904382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.904590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.904655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.904890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.904954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 
00:29:52.302 [2024-11-20 06:40:23.905243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.905321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.905617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.905681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.905927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.905991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.906192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.906255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.906516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.906582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.906838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.906902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.907121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.907185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.907440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.907517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.907753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.907818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.908073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.908137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 
00:29:52.302 [2024-11-20 06:40:23.908356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.908447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.908703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.908769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.908974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.909038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.909220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.909286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.909549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.909613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.909906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.909970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.910226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.910290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.910557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.910622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.910899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.910963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.911200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.911264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 
00:29:52.302 [2024-11-20 06:40:23.911565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.911631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.911898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.911965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.912213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.302 [2024-11-20 06:40:23.912278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.302 qpair failed and we were unable to recover it. 00:29:52.302 [2024-11-20 06:40:23.912515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.912580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.912799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.912863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.913120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.913184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.913430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.913498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.913760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.913824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.914026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.914089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.914341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.914408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 
00:29:52.303 [2024-11-20 06:40:23.914652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.914716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.914958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.915021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.915297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.915374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.915590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.915654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.915937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.916002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.916257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.916355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.916642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.916706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.916912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.916976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.917215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.917279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.917550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.917614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 
00:29:52.303 [2024-11-20 06:40:23.917820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.917884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.918162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.918226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.918499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.918563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.918809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.918873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.919111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.919174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.919422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.919487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.919686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.919750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.919995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.920062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.920331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.920398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 00:29:52.303 [2024-11-20 06:40:23.920598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.303 [2024-11-20 06:40:23.920662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.303 qpair failed and we were unable to recover it. 
00:29:52.303 [2024-11-20 06:40:23.920900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.303 [2024-11-20 06:40:23.920963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:52.303 qpair failed and we were unable to recover it.
00:29:52.303 [... identical error pairs (posix.c:1054:posix_sock_create: "connect() failed, errno = 111" followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: "sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420"), each ending with "qpair failed and we were unable to recover it.", repeat continuously from 06:40:23.920900 through 06:40:23.986065 (log timestamps 00:29:52.303-00:29:52.310) ...]
00:29:52.310 [2024-11-20 06:40:23.986300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.986377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.986620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.986684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.986982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.987047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.987256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.987338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.987557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.987622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.987917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.987983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.988218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.988282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.310 [2024-11-20 06:40:23.988598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.310 [2024-11-20 06:40:23.988663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.310 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.988941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.989005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.989254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.989337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 
00:29:52.311 [2024-11-20 06:40:23.989634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.989699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.989940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.990004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.990244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.990323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.990574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.990638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.990883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.990947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.991172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.991235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.991507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.991573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.991822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.991886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.992174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.992238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.992548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.992614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 
00:29:52.311 [2024-11-20 06:40:23.992895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.992959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.993214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.993278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.993537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.993601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.993889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.993953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.994256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.994338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.994627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.994692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.994930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.994998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.995247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.995327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.995572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.995635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.995838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.995902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 
00:29:52.311 [2024-11-20 06:40:23.996182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.996246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.996531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.996597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.996851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.996928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.997168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.997232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.997496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.997562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.311 [2024-11-20 06:40:23.997814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.311 [2024-11-20 06:40:23.997879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.311 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:23.998171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:23.998234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:23.998462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:23.998527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:23.998776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:23.998843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:23.999033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:23.999096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 
00:29:52.312 [2024-11-20 06:40:23.999360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:23.999426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:23.999671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:23.999735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:23.999972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.000036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.000282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.000363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.000611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.000675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.000918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.000986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.001239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.001319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.001535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.001598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.001839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.001904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.002178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.002242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 
00:29:52.312 [2024-11-20 06:40:24.002540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.002605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.002865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.002930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.003126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.003191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.003413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.003478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.003722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.003786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.003995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.004061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.004358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.004423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.004624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.004688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.004900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.004967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.005167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.005246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 
00:29:52.312 [2024-11-20 06:40:24.005519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.005585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.005792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.312 [2024-11-20 06:40:24.005856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.312 qpair failed and we were unable to recover it. 00:29:52.312 [2024-11-20 06:40:24.006109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.006173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.006431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.006497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.006784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.006847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.007129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.007194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.007425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.007490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.007739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.007802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.008051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.008115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.008331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.008399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 
00:29:52.313 [2024-11-20 06:40:24.008685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.008749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.008962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.009026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.009275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.009354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.009618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.009682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.009919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.009983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.010207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.010272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.010580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.010644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.010847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.010914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.011138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.011204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.011470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.011535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 
00:29:52.313 [2024-11-20 06:40:24.011785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.011850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.012089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.012154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.012436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.012502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.012757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.012822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.013060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.013122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.013368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.013434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.013669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.013734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.014004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.014068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.014360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.014426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.014708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.014773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 
00:29:52.313 [2024-11-20 06:40:24.015007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.015071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.015314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.015379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.015594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.015661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.015879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.015943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.016222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.313 [2024-11-20 06:40:24.016286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.313 qpair failed and we were unable to recover it. 00:29:52.313 [2024-11-20 06:40:24.016556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.016623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.016844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.016907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.017141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.017208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.017497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.017563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.017811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.017875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 
00:29:52.314 [2024-11-20 06:40:24.018177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.018275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.018547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.018618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.018889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.018971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.019190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.019256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.019539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.019609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.019887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.019953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.020189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.020271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.020588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.020656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.020911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.020977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.021232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.021298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 
00:29:52.314 [2024-11-20 06:40:24.021557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.021626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.021877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.021942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.022189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.022254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.022581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.022664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.022947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.023012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.023224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.023292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.023588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.023653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.023869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.023949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.024197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.024261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.024512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.024579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 
00:29:52.314 [2024-11-20 06:40:24.024817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.024883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.025131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.025214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.025498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.025567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.025849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.025915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.026217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.026301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.026589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.026653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.026911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.026976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.027253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.027340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.027612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.027680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.027941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.028007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 
00:29:52.314 [2024-11-20 06:40:24.028238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.314 [2024-11-20 06:40:24.028326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.314 qpair failed and we were unable to recover it. 00:29:52.314 [2024-11-20 06:40:24.028591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.028662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.028962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.029028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.029241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.029329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.029546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.029611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.029867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.029947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.030220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.030288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.030581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.030647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.030895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.030960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.031231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.031299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 
00:29:52.315 [2024-11-20 06:40:24.031613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.031679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.031923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.031991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.032251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.032339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.032615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.032684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.032933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.032998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.033272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.033362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.033621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.033697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.033900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.033966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.034198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.034264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.034511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.034578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 
00:29:52.315 [2024-11-20 06:40:24.034822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.034890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.035151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.035218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.035433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.035500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.035712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.035789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.036011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.036089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.036412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.036480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.036738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.036803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.037055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.037120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.037391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.037461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.315 [2024-11-20 06:40:24.037703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.037768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 
00:29:52.315 [2024-11-20 06:40:24.038005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.315 [2024-11-20 06:40:24.038072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.315 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.038356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.038424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.038693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.038761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.039043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.039109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.039355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.039423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.039655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.039734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.040001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.040068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.040284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.040371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.040573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.040637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.040895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.040961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 
00:29:52.316 [2024-11-20 06:40:24.041188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.041255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.041499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.041564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.041774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.041839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.042083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.042150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.042382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.042462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.042675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.042740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.043004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.043070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.043352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.043420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.043687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.043755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.044015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.044084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 
00:29:52.316 [2024-11-20 06:40:24.044363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.044430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.044676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.044759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.045027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.045094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.045353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.045422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.045637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.045705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.045962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.046047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.046263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.046361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.046587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.046655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.046901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.046966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.047225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.047296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 
00:29:52.316 [2024-11-20 06:40:24.047599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.047666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.047902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.047967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.048247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.048330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.048628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.048708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.316 qpair failed and we were unable to recover it. 00:29:52.316 [2024-11-20 06:40:24.048956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.316 [2024-11-20 06:40:24.049021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.049272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.049358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.049651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.049727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.050000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.050066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.050287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.050370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.050625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.050690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 
00:29:52.317 [2024-11-20 06:40:24.050941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.051009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.051325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.051393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.051601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.051666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.051920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.051986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.052281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.052374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.052597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.052662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.052941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.053006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.053298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.053389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.053658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.053724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.053962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.054027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 
00:29:52.317 [2024-11-20 06:40:24.054287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.054391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.054682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.054752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.055021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.055088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.055358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.055426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.055671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.055737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.055967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.056048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.056316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.056386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.056636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.056704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.056995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.057061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 00:29:52.317 [2024-11-20 06:40:24.057332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.317 [2024-11-20 06:40:24.057402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.317 qpair failed and we were unable to recover it. 
00:29:52.318 [2024-11-20 06:40:24.057654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.057731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.057970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.058036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.058334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.058417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.058658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.058723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.059003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.059068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.059363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.059431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.059671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.059741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.059984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.060052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.060326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.060395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.060615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.060679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 
00:29:52.318 [2024-11-20 06:40:24.060967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.061034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.061256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.061342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.061560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.061627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.061903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.061971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.062228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.062297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.062596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.062662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.062940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.063006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.063294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.063384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.063646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.063712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.063995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.064059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 
00:29:52.318 [2024-11-20 06:40:24.064315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.064385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.064691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.064758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.064967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.065033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.065335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.065402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.065620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.065700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.065956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.066022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.066260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.066345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.066620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.066687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.066955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.067022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 00:29:52.318 [2024-11-20 06:40:24.067275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.067360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.318 qpair failed and we were unable to recover it. 
00:29:52.318 [2024-11-20 06:40:24.067648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.318 [2024-11-20 06:40:24.067715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.067959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.068024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.068252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.068336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.068589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.068655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.068946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.069011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.069287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.069383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.069663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.069730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.070026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.070090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.070362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.070430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.070688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.070756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 
00:29:52.319 [2024-11-20 06:40:24.071048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.071125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.071423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.071491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.071749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.071832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.072100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.072166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.072401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.072469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.072712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.072777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.073065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.073133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.073346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.073414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.073701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.073767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.074044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.074124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 
00:29:52.319 [2024-11-20 06:40:24.074412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.074481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.074722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.074790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.075034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.075098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.075315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.075403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.075667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.075733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.075971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.076039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.076325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.076392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.076627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.076695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.076949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.077015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.077278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.077365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 
00:29:52.319 [2024-11-20 06:40:24.077615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.077683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.077941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.078008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.078232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.078298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.078560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.078625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.078861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.078926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.079229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.319 [2024-11-20 06:40:24.079297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.319 qpair failed and we were unable to recover it. 00:29:52.319 [2024-11-20 06:40:24.079532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.079598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.079838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.079904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.080132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.080198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 
00:29:52.320 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2206318 Killed "${NVMF_APP[@]}" "$@"
00:29:52.320 [2024-11-20 06:40:24.080489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.080558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 [2024-11-20 06:40:24.080758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.080826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:29:52.320 [2024-11-20 06:40:24.081103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.081171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:29:52.320 [2024-11-20 06:40:24.081426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.081504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 [2024-11-20 06:40:24.081754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:29:52.320 [2024-11-20 06:40:24.081822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:29:52.320 [2024-11-20 06:40:24.082107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.082174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:52.320 [2024-11-20 06:40:24.082439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.082507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 [2024-11-20 06:40:24.082742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.082811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.083042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.083109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.083338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.083407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.083599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.083664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.083934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.084014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.084225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.084291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.084560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.084626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.084871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.084936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.085165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.085247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.085488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.085554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 
00:29:52.320 [2024-11-20 06:40:24.085800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.085867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 [2024-11-20 06:40:24.086119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.086184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 [2024-11-20 06:40:24.086449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.086529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=2206869
00:29:52.320 [2024-11-20 06:40:24.086783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:29:52.320 [2024-11-20 06:40:24.086851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 2206869
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # '[' -z 2206869 ']'
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # local max_retries=100
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:29:52.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # xtrace_disable
00:29:52.320 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:52.320 [2024-11-20 06:40:24.090276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.320 [2024-11-20 06:40:24.090319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.320 qpair failed and we were unable to recover it.
00:29:52.320 [2024-11-20 06:40:24.090440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.090469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.090570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.090601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.090728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.090762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.090866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.090894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.090981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.091007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.091125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.320 [2024-11-20 06:40:24.091151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.320 qpair failed and we were unable to recover it. 00:29:52.320 [2024-11-20 06:40:24.091239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.091265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.091381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.091408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.091516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.091542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.091631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.091657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 
00:29:52.321 [2024-11-20 06:40:24.091798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.091823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.091914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.091940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.092883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.092909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 
00:29:52.321 [2024-11-20 06:40:24.093024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.093937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.093978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.094098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.094245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 
00:29:52.321 [2024-11-20 06:40:24.094393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.094563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.094681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.094799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.094912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.094938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 
00:29:52.321 [2024-11-20 06:40:24.095692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.095919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.095948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.321 qpair failed and we were unable to recover it. 00:29:52.321 [2024-11-20 06:40:24.096056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.321 [2024-11-20 06:40:24.096082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.096163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.096190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.096271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.096297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.096415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.096442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.096531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.096558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.096688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.096722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.096852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.096879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 
00:29:52.322 [2024-11-20 06:40:24.096981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.097153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.097292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.097414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.097573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.097735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.097873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.097908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.098096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.098130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.098264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.098297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.098422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.098448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 
00:29:52.322 [2024-11-20 06:40:24.098592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.098624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.098767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.098799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.098907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.098942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.099853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.099878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 
00:29:52.322 [2024-11-20 06:40:24.099982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.100007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.100100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.100127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.100213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.100238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.100326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.100353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.100464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.100490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.100578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.322 [2024-11-20 06:40:24.100609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.322 qpair failed and we were unable to recover it. 00:29:52.322 [2024-11-20 06:40:24.100689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.100715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.100793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.100819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.100903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.100928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 
00:29:52.323 [2024-11-20 06:40:24.101144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.101910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.101944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.102058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.102085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.102173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.102200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.102294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.102327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 
00:29:52.323 [2024-11-20 06:40:24.102456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.102488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.102595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.102638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.102797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.102830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.103910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.103959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 
00:29:52.323 [2024-11-20 06:40:24.104072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.104959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.104985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.323 qpair failed and we were unable to recover it. 00:29:52.323 [2024-11-20 06:40:24.105097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.323 [2024-11-20 06:40:24.105122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.105246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.105272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 
00:29:52.616 [2024-11-20 06:40:24.105388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.105426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.105519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.105547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.105668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.105708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.105804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.105833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.105930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.105957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.106061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.106214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.106364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.106483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.106615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 
00:29:52.616 [2024-11-20 06:40:24.106768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.106922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.106958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.107071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.107121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.107264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.107294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.107424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.107451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.107544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.107594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.107702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.107735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.107850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.107886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.108000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.108035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.108212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.108239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 
00:29:52.616 [2024-11-20 06:40:24.108338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.108364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.616 [2024-11-20 06:40:24.108503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.616 [2024-11-20 06:40:24.108530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.616 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.108633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.108666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.108775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.108808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.108911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.108944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.109074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.109106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.109212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.109244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.109388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.109415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.109505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.109531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.109622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.109649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 
00:29:52.617 [2024-11-20 06:40:24.109827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.109887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.110086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.110150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.110369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.110396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.110503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.110530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.110628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.110655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.110768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.110794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.111040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.111216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.111365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.111488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 
00:29:52.617 [2024-11-20 06:40:24.111606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.111741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.111915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.111975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.112156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.112218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.112396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.112424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.112541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.112572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.112735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.112805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.113008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.113052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.113253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.113325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.113450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.113476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 
00:29:52.617 [2024-11-20 06:40:24.113595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.113621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.113743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.113800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.114048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.114106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.114275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.114353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.114474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.114501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.114594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.114619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.114731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.114758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.114896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.114952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.617 [2024-11-20 06:40:24.115116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.617 [2024-11-20 06:40:24.115180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.617 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.115378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.115405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 
00:29:52.618 [2024-11-20 06:40:24.115522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.115549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.115727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.115785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.115983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.116016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.116128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.116160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.116298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.116344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.116459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.116484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.116569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.116631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.116861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.116918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.117161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.117219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.117420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.117447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 
00:29:52.618 [2024-11-20 06:40:24.117549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.117606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.117805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.117862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.118828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.118887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.119037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.119097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 
00:29:52.618 [2024-11-20 06:40:24.119325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.119378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.119492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.119518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.119628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.119654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.119761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.119787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.119963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.120018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.120208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.120274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.120458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.120501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.120678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.120762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.120941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.121017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.121271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.121361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 
00:29:52.618 [2024-11-20 06:40:24.121481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.121509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.121605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.121631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.121746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.121774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.121984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.122017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.122169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.618 [2024-11-20 06:40:24.122202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.618 qpair failed and we were unable to recover it. 00:29:52.618 [2024-11-20 06:40:24.122355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.122383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.122474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.122500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.122614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.122642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.122724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.122752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.122938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.122972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 
00:29:52.619 [2024-11-20 06:40:24.123193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.123226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.123333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.123382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.123495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.123522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.123682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.123715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.123894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.123951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.124182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.124239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.124442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.124499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.124690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.124746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.124960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.125016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.125230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.125290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 
00:29:52.619 [2024-11-20 06:40:24.125559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.125616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.125867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.125924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.126120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.126197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.126410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.126477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.126665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.126721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.126933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.126988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.127188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.127249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.127514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.127573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.127750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.127808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.128004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.128060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 
00:29:52.619 [2024-11-20 06:40:24.128270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.128344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.128530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.128586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.128784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.128841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.129091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.129148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.129360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.129418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.129631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.129700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.129913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.129965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.130127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.130159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.130334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.130392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.130570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.130627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 
00:29:52.619 [2024-11-20 06:40:24.130831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.130864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.131023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.131056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.619 qpair failed and we were unable to recover it. 00:29:52.619 [2024-11-20 06:40:24.131290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.619 [2024-11-20 06:40:24.131360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.131549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.131606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.131856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.131912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.132101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.132157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.132345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.132402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.132570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.132626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.132852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.132908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.133083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.133139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 
00:29:52.620 [2024-11-20 06:40:24.133335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.133392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.133616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.133672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.133872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.133927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.134119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.134178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.134368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.134426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.134630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.134687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.134881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.134914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.135075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.135108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.135295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.135366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.135551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.135609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 
00:29:52.620 [2024-11-20 06:40:24.135801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.135860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.136049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.136106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.136316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.136377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.136560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.136593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.136754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.136787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.136896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.136928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.137088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.137121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.137300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.137373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.137557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.137614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.137803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.137860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 
00:29:52.620 [2024-11-20 06:40:24.138079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.138137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.138362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.138422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.138643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.138676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.138773] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:29:52.620 [2024-11-20 06:40:24.138841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.138861] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:52.620 [2024-11-20 06:40:24.138874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.139037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.139108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.139371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.139427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.620 qpair failed and we were unable to recover it. 00:29:52.620 [2024-11-20 06:40:24.139648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.620 [2024-11-20 06:40:24.139703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.139915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.139953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.140062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.140094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 
00:29:52.621 [2024-11-20 06:40:24.140255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.140288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.140403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.140438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.140601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.140634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.140806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.140870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.141074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.141131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.141335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.141413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.141709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.141771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.142014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.142047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.142158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.142191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.142330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.142363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 
00:29:52.621 [2024-11-20 06:40:24.142494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.142526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.142667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.142700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.142838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.142870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.142969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.143001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.143111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.143144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.143284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.143323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.143427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.143460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.143614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.143676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.143908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.143964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.144152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.144186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 
00:29:52.621 [2024-11-20 06:40:24.144296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.144334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.144469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.144502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.144725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.144785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.145026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.145082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.145278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.145392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.145652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.145714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.145949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.146007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.146181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.146238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.146432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.146511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.146770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.146831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 
00:29:52.621 [2024-11-20 06:40:24.147066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.147123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.621 [2024-11-20 06:40:24.147383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.621 [2024-11-20 06:40:24.147416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.621 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.147715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.147775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.148018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.148075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.148258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.148326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.148567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.148657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.148860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.148910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.149058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.149085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.149174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.149198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.149313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.149379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 
00:29:52.622 [2024-11-20 06:40:24.149554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.149606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.149809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.149862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.150052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.150104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.150266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.150342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.150427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.150454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.150565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.150633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.150788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.150839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.151023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.151074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.151255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.151323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.151437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.151463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 
00:29:52.622 [2024-11-20 06:40:24.151552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.151577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.151734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.151785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.151961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.152014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.152173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.152225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.152417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.152444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.152532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.152558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.152751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.152804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.153003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.153057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.153287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.153369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.153451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.153477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 
00:29:52.622 [2024-11-20 06:40:24.153577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.153627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.153847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.153873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.153970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.153997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.154093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.154144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.154351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.154378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.154464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.154490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.154593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.154618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.154704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.154729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.154861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.622 [2024-11-20 06:40:24.154912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.622 qpair failed and we were unable to recover it. 00:29:52.622 [2024-11-20 06:40:24.155080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.155131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 
00:29:52.623 [2024-11-20 06:40:24.155347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.155402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.155603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.155657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.155914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.155967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.156227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.156253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.156353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.156380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.156466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.156522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.156722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.156773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.156976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.157027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.157220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.157270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.157460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.157512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 
00:29:52.623 [2024-11-20 06:40:24.157683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.157736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.157907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.157960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.158196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.158249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.158487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.158540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.158750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.158803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.158975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.159038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.159197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.159251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.159434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.159488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.159666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.159719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.159899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.159969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 
00:29:52.623 [2024-11-20 06:40:24.160137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.160213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.160453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.160514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.160748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.160809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.161031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.161092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.161357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.161415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.161586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.161639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.161847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.161900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.162100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.162155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.162353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.162407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.162567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.162619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 
00:29:52.623 [2024-11-20 06:40:24.162822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.162876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.163049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.163102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.163293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.163358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.163555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.163608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.163780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.163833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.164034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.164087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.623 [2024-11-20 06:40:24.164288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.623 [2024-11-20 06:40:24.164351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.623 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.164557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.164611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.164768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.164819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.165017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.165068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 
00:29:52.624 [2024-11-20 06:40:24.165277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.165346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.165516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.165566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.165735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.165805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.165974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.166031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.166209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.166267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.166568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.166644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.166860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.166924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.167197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.167259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.167473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.167552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.167801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.167858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 
00:29:52.624 [2024-11-20 06:40:24.168084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.168137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.168356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.168409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.168609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.168661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.168820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.168875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.169043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.169095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.169288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.169352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.169540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.169593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.169776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.169830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.170061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.170114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.170354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.170411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 
00:29:52.624 [2024-11-20 06:40:24.170607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.170659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.170863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.170915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.171117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.171170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.171381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.171435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.171606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.171662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.171845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.171899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.172080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.172133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.172367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.172421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.172579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.172632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.172868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.172921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 
00:29:52.624 [2024-11-20 06:40:24.173156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.173209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.173424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.173681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.173738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.173950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.624 [2024-11-20 06:40:24.174006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.624 qpair failed and we were unable to recover it. 00:29:52.624 [2024-11-20 06:40:24.174198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.174253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.174483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.174540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.174722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.174779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.175007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.175063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.175282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.175353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.175541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.175598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 
00:29:52.625 [2024-11-20 06:40:24.175809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.175866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.176083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.176140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.176331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.176391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.176598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.176656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.176869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.176926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.177132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.177197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.177427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.177487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.177662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.177719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.177916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.177972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.178161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.178220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 
00:29:52.625 [2024-11-20 06:40:24.178468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.178496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.178611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.178638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.178731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.178758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.178847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.178874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.178983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.179106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.179260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.179388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.179503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.179651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 
00:29:52.625 [2024-11-20 06:40:24.179764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.179905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.179931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.180030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.180058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.180203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.180229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.180340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.180368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.180461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.180487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.180569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.180596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.625 qpair failed and we were unable to recover it. 00:29:52.625 [2024-11-20 06:40:24.180715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.625 [2024-11-20 06:40:24.180741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.180836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.180862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.180976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 
00:29:52.626 [2024-11-20 06:40:24.181090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.181238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.181409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.181545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.181691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.181832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.181968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.181996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.182117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.182253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.182435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 
00:29:52.626 [2024-11-20 06:40:24.182548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.182679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.182797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.182932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.182959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.183090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.183249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.183404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.183553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.183666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.183787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 
00:29:52.626 [2024-11-20 06:40:24.183903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.183929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.184945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.184971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 00:29:52.626 [2024-11-20 06:40:24.185091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.626 [2024-11-20 06:40:24.185118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.626 qpair failed and we were unable to recover it. 
00:29:52.626 [2024-11-20 06:40:24.185214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.626 [2024-11-20 06:40:24.185239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420
00:29:52.626 qpair failed and we were unable to recover it.
00:29:52.626 [... the same three messages (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeat continuously, alternating between tqpair=0x7fe760000b90 and tqpair=0x205ffa0, always with addr=10.0.0.2, port=4420, from 06:40:24.185 through 06:40:24.212 ...]
00:29:52.632 [2024-11-20 06:40:24.212459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.212484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.212599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.212625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.212706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.212732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.212865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.212891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.212978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 
00:29:52.632 [2024-11-20 06:40:24.213743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.213965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.213991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.214131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.214158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.214250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.632 [2024-11-20 06:40:24.214276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.632 qpair failed and we were unable to recover it. 00:29:52.632 [2024-11-20 06:40:24.214370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.214397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.214490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.214517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.214599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.214626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.214718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.214745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.214840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.214867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 
00:29:52.633 [2024-11-20 06:40:24.215019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.215912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.215940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 
00:29:52.633 [2024-11-20 06:40:24.216282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.216907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.216989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 
00:29:52.633 [2024-11-20 06:40:24.217500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.217961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.217987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.218084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.218111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.218229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.218255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.218342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.218369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.218479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.218506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 00:29:52.633 [2024-11-20 06:40:24.218595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.218622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.633 qpair failed and we were unable to recover it. 
00:29:52.633 [2024-11-20 06:40:24.218704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.633 [2024-11-20 06:40:24.218731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.218816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.218843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.218914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:52.634 [2024-11-20 06:40:24.218941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.218967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.219839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 
00:29:52.634 [2024-11-20 06:40:24.219956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.219983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.220873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.220899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 
00:29:52.634 [2024-11-20 06:40:24.221246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.221883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.221910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 
00:29:52.634 [2024-11-20 06:40:24.222533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.222947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.222973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.223058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.634 [2024-11-20 06:40:24.223084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.634 qpair failed and we were unable to recover it. 00:29:52.634 [2024-11-20 06:40:24.223207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.223235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.223339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.223378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.223514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.223543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.223641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.223675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.223774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.223801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
00:29:52.635 [2024-11-20 06:40:24.223906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.223932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.224898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.224926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
00:29:52.635 [2024-11-20 06:40:24.225145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.225947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.225979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.226073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.226214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.226334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
00:29:52.635 [2024-11-20 06:40:24.226484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.226615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.226790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.226937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.226971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.227094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.227251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.227375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.227506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.227661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.635 [2024-11-20 06:40:24.227798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 
00:29:52.635 [2024-11-20 06:40:24.227898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.635 [2024-11-20 06:40:24.227928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.635 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.228902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.228992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.229018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 00:29:52.636 [2024-11-20 06:40:24.229101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.636 [2024-11-20 06:40:24.229129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.636 qpair failed and we were unable to recover it. 
00:29:52.636 - 00:29:52.641 Repeated NVMe/TCP qpair connection failures (2024-11-20 06:40:24.229247 through 06:40:24.256577).
Every attempt in this interval logs the same three messages:
  posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
  nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90, 0x7fe768000b90, or 0x205ffa0 with addr=10.0.0.2, port=4420
  qpair failed and we were unable to recover it.
All attempts target addr=10.0.0.2, port=4420 and cycle among the three tqpair handles listed above; none of the qpairs recovers.
00:29:52.641 [2024-11-20 06:40:24.256697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.641 [2024-11-20 06:40:24.256724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.641 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.256806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.256833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.256942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.256968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.257828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 
00:29:52.642 [2024-11-20 06:40:24.257949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.257978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.258932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.258960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 
00:29:52.642 [2024-11-20 06:40:24.259217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.259868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.259958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 
00:29:52.642 [2024-11-20 06:40:24.260507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.260912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.260998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.261026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.261142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.261171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.261285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.261328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.642 [2024-11-20 06:40:24.261429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.642 [2024-11-20 06:40:24.261458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.642 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.261550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.261577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.261700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.261727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 
00:29:52.643 [2024-11-20 06:40:24.261816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.261843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.261952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.261987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.262939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.262965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 
00:29:52.643 [2024-11-20 06:40:24.263051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.263952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.263979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 
00:29:52.643 [2024-11-20 06:40:24.264283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.264914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.264940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 
00:29:52.643 [2024-11-20 06:40:24.265518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.643 [2024-11-20 06:40:24.265810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.643 qpair failed and we were unable to recover it. 00:29:52.643 [2024-11-20 06:40:24.265921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.265947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 
00:29:52.644 [2024-11-20 06:40:24.266733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.266850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.266877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.267849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 
00:29:52.644 [2024-11-20 06:40:24.267967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.267994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.268928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.268956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.269042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.269069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 
00:29:52.644 [2024-11-20 06:40:24.269187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.269214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.269318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.644 [2024-11-20 06:40:24.269345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.644 qpair failed and we were unable to recover it. 00:29:52.644 [2024-11-20 06:40:24.269435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.269462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.269552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.269580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.269710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.269738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.269833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.269860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.269974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 
00:29:52.645 [2024-11-20 06:40:24.270448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.270953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.270980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 
00:29:52.645 [2024-11-20 06:40:24.271678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.271927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.271953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.272832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.272858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 
00:29:52.645 [2024-11-20 06:40:24.272967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.273881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.273908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.274002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.274028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.645 qpair failed and we were unable to recover it. 00:29:52.645 [2024-11-20 06:40:24.274142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.645 [2024-11-20 06:40:24.274168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 
00:29:52.646 [2024-11-20 06:40:24.274272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.274397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.274499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.274618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.274732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.274837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.274941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.274967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 
00:29:52.646 [2024-11-20 06:40:24.275433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.275945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.275972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 
00:29:52.646 [2024-11-20 06:40:24.276655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.276869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.276895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.277775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 
00:29:52.646 [2024-11-20 06:40:24.277900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.277939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.278039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.278072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.278172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.646 [2024-11-20 06:40:24.278199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.646 qpair failed and we were unable to recover it. 00:29:52.646 [2024-11-20 06:40:24.278277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.278328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.278441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.278468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.278560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.278588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.278706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.278733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.278817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.278844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.278930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.278957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 
00:29:52.647 [2024-11-20 06:40:24.279167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.279886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.279914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 
00:29:52.647 [2024-11-20 06:40:24.280369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.280965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.280993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281475] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:52.647 [2024-11-20 06:40:24.281501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. [2024-11-20 06:40:24.281508] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:52.647 [2024-11-20 06:40:24.281523] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:52.647 [2024-11-20 06:40:24.281536] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:52.647 [2024-11-20 06:40:24.281547] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:52.647 [2024-11-20 06:40:24.281597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.281938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.281965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.282061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.282089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.282199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.282228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.647 [2024-11-20 06:40:24.282353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.647 [2024-11-20 06:40:24.282381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.647 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.282487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.282516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 
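The four app_setup_trace notices above are the log's own instructions for pulling trace data out of this run. A minimal sketch of that workflow on the test node, assuming the nvmf target is still running: the spdk_trace invocation and the /dev/shm/nvmf_trace.0 path come straight from the notices, while the copy destination is an illustrative assumption and not something this log confirms.

  # Snapshot the nvmf tracepoint group from SPDK instance 0 (the shm id behind /dev/shm/nvmf_trace.0), per the notice above.
  spdk_trace -s nvmf -i 0

  # Or keep the shared-memory trace file for offline analysis/debug; the destination path here is arbitrary.
  cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0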
00:29:52.648 [2024-11-20 06:40:24.282614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.282640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.282757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.282783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.282866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.282892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:29:52.648 [2024-11-20 06:40:24.283208] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:29:52.648 [2024-11-20 06:40:24.283288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 
00:29:52.648 [2024-11-20 06:40:24.283791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.283235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:29:52.648 [2024-11-20 06:40:24.283239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:52.648 [2024-11-20 06:40:24.283922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.283948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.284818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 
00:29:52.648 [2024-11-20 06:40:24.284964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.284992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.285896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.285922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.286009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.286036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.286127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.286154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 
00:29:52.648 [2024-11-20 06:40:24.286247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.286274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.286379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.286412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.648 [2024-11-20 06:40:24.286502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.648 [2024-11-20 06:40:24.286530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.648 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.286625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.286652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.286736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.286764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.286853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.286880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.286969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.286996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 
00:29:52.649 [2024-11-20 06:40:24.287432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.287888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.287914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 
00:29:52.649 [2024-11-20 06:40:24.288733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.288953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.288979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.289849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 
00:29:52.649 [2024-11-20 06:40:24.289966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.289993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.290118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.649 [2024-11-20 06:40:24.290145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.649 qpair failed and we were unable to recover it. 00:29:52.649 [2024-11-20 06:40:24.290221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.290247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.290336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.290363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.290450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.290476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.290588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.290616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.290737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.290764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.290846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.290872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.290977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 
00:29:52.650 [2024-11-20 06:40:24.291207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.291903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.291931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 
00:29:52.650 [2024-11-20 06:40:24.292392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.292916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.292944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.293028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.293056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.293156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.293195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.293291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.293327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.293414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.293441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 00:29:52.650 [2024-11-20 06:40:24.293559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.650 [2024-11-20 06:40:24.293585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.650 qpair failed and we were unable to recover it. 
00:29:52.650 [2024-11-20 06:40:24.293693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.650 [2024-11-20 06:40:24.293730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.650 qpair failed and we were unable to recover it.
[output condensed: the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=<pointer> with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." — repeats without interruption from 06:40:24.293839 through 06:40:24.319516 (Jenkins timestamps 00:29:52.650–00:29:52.656), with the tqpair value alternating among 0x7fe768000b90, 0x205ffa0, 0x7fe760000b90, and 0x7fe75c000b90.]
00:29:52.656 [2024-11-20 06:40:24.319647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.319687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.319778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.319811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.319888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.319915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.320786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 
00:29:52.656 [2024-11-20 06:40:24.320922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.320950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.321078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.321209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.321348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.321487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.321593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.656 [2024-11-20 06:40:24.321737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.656 qpair failed and we were unable to recover it. 00:29:52.656 [2024-11-20 06:40:24.321827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.321860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.321953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.321982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.322069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 
00:29:52.657 [2024-11-20 06:40:24.322193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.322342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.322484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.322640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.322754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.322892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.322920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 
00:29:52.657 [2024-11-20 06:40:24.323516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.323905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.323999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.324143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.324280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.324433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.324548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.324670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 
00:29:52.657 [2024-11-20 06:40:24.324797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.324951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.324978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.325918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.325946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 
00:29:52.657 [2024-11-20 06:40:24.326033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.326149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.326266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.326392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.326504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.326652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.657 [2024-11-20 06:40:24.326783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.657 [2024-11-20 06:40:24.326817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.657 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.326896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.326922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 
00:29:52.658 [2024-11-20 06:40:24.327255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.327905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.327983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 
00:29:52.658 [2024-11-20 06:40:24.328510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.328889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.328998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 
00:29:52.658 [2024-11-20 06:40:24.329732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.329887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.329986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.330814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 
00:29:52.658 [2024-11-20 06:40:24.330971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.330998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.658 [2024-11-20 06:40:24.331080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.658 [2024-11-20 06:40:24.331107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.658 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.331916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.331944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 
00:29:52.659 [2024-11-20 06:40:24.332168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.332909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.332990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 
00:29:52.659 [2024-11-20 06:40:24.333322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.333907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.333934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 
00:29:52.659 [2024-11-20 06:40:24.334548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.334930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.334957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.335037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.335064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.335149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.335176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.335275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.335311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.335396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.335422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.335534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.335562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 00:29:52.659 [2024-11-20 06:40:24.335658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.659 [2024-11-20 06:40:24.335684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.659 qpair failed and we were unable to recover it. 
00:29:52.659 [2024-11-20 06:40:24.335767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.659 [2024-11-20 06:40:24.335793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420
00:29:52.659 qpair failed and we were unable to recover it.
00:29:52.660 [2024-11-20 06:40:24.336916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-11-20 06:40:24.336945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420
00:29:52.660 qpair failed and we were unable to recover it.
00:29:52.660 [2024-11-20 06:40:24.337024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-11-20 06:40:24.337053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420
00:29:52.660 qpair failed and we were unable to recover it.
00:29:52.660 [2024-11-20 06:40:24.337195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:52.660 [2024-11-20 06:40:24.337234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420
00:29:52.660 qpair failed and we were unable to recover it.
00:29:52.666 [... the same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111 -> nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously through [2024-11-20 06:40:24.361255] for tqpairs 0x7fe768000b90, 0x7fe760000b90, 0x7fe75c000b90 and 0x205ffa0, all with addr=10.0.0.2, port=4420 ...]
00:29:52.666 [2024-11-20 06:40:24.361365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.361396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.361494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.361522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.361610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.361640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.361725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.361751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.361838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.361864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.361948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.361975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.362057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.362084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.362167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.362195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.362318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.362347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.362427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.362454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 
00:29:52.666 [2024-11-20 06:40:24.362543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.362569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.666 [2024-11-20 06:40:24.362658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.666 [2024-11-20 06:40:24.362685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.666 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.362763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.362792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.362872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.362899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 
00:29:52.667 [2024-11-20 06:40:24.363696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.363930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.363956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.364799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 
00:29:52.667 [2024-11-20 06:40:24.364923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.364952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.365893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.365919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.366111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.366137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 
00:29:52.667 [2024-11-20 06:40:24.366240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.366266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.366357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.366384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.366467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.366493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.366584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.667 [2024-11-20 06:40:24.366616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.667 qpair failed and we were unable to recover it. 00:29:52.667 [2024-11-20 06:40:24.366702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.366729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.366808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.366837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.366930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.366958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 
00:29:52.668 [2024-11-20 06:40:24.367447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.367913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.367939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 
00:29:52.668 [2024-11-20 06:40:24.368680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.368915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.368943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 
00:29:52.668 [2024-11-20 06:40:24.369871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.369898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.369980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.370007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.370112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.370152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.370268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.370296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.668 [2024-11-20 06:40:24.370418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.668 [2024-11-20 06:40:24.370445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.668 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.370532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.370559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.370668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.370694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.370780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.370808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.370920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.370947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 
00:29:52.669 [2024-11-20 06:40:24.371176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.371912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.371938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 
00:29:52.669 [2024-11-20 06:40:24.372388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.372865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.372975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.373002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.373088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.373116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.373207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.373246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.373387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.373418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.669 [2024-11-20 06:40:24.373538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.373565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 
00:29:52.669 [2024-11-20 06:40:24.373655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.669 [2024-11-20 06:40:24.373682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.669 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.373769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.373796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.373900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.373927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 
00:29:52.670 [2024-11-20 06:40:24.374835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.374940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.374966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.375873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.375899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 
00:29:52.670 [2024-11-20 06:40:24.375997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.376892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.376979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.377009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.377091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.377117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 
00:29:52.670 [2024-11-20 06:40:24.377198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.377224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.377312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.377350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.670 qpair failed and we were unable to recover it. 00:29:52.670 [2024-11-20 06:40:24.377442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.670 [2024-11-20 06:40:24.377470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.377561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.377609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.377699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.377727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.377809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.377836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.377951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.377976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.378056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.378083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.378183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.378222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 00:29:52.671 [2024-11-20 06:40:24.378341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.671 [2024-11-20 06:40:24.378371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.671 qpair failed and we were unable to recover it. 
[... the identical "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock failures repeat continuously for tqpair=0x205ffa0, 0x7fe75c000b90, 0x7fe760000b90, and 0x7fe768000b90 (addr=10.0.0.2, port=4420), each ending with "qpair failed and we were unable to recover it." ...]
00:29:52.677 [2024-11-20 06:40:24.401515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.401544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.401635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.401663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.401746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.401774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.401852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.401879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.401957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.401983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 
00:29:52.677 [2024-11-20 06:40:24.402662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.402940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.677 [2024-11-20 06:40:24.402967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.677 qpair failed and we were unable to recover it. 00:29:52.677 [2024-11-20 06:40:24.403059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.403170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.403294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.403428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.403547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.403656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.403794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 
00:29:52.678 [2024-11-20 06:40:24.403913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.403941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.404883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.404909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 
00:29:52.678 [2024-11-20 06:40:24.405134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.405929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.405956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 
00:29:52.678 [2024-11-20 06:40:24.406308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.406935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.678 [2024-11-20 06:40:24.406963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.678 qpair failed and we were unable to recover it. 00:29:52.678 [2024-11-20 06:40:24.407047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.407178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.407284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.407432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 
00:29:52.679 [2024-11-20 06:40:24.407537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.407654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.407793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.407903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.407929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 
00:29:52.679 [2024-11-20 06:40:24.408728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.408876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.408975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.409808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 
00:29:52.679 [2024-11-20 06:40:24.409916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.409943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.410039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.410079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.410167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.410193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.410287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.410321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.410411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.410437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.410524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.410551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.679 [2024-11-20 06:40:24.410637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.679 [2024-11-20 06:40:24.410662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.679 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.410755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.410781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.410881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.410911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.410996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 
00:29:52.680 [2024-11-20 06:40:24.411121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.411227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.411369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:29:52.680 [2024-11-20 06:40:24.411497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@866 -- # return 0 00:29:52.680 [2024-11-20 06:40:24.411616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.411758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.411863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:52.680 [2024-11-20 06:40:24.411970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.411998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.412090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 
00:29:52.680 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:29:52.680 [2024-11-20 06:40:24.412214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.412359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.680 [2024-11-20 06:40:24.412477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.412616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.412725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.412839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.412948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.412975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.413055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.413081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 00:29:52.680 [2024-11-20 06:40:24.413166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.413194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.680 qpair failed and we were unable to recover it. 
00:29:52.680 [2024-11-20 06:40:24.413275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.680 [2024-11-20 06:40:24.413309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.413412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.413437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.413516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.413542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.413627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.413653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.413770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.413803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.413908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.413934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 
00:29:52.681 [2024-11-20 06:40:24.414475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.414910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.414936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 
00:29:52.681 [2024-11-20 06:40:24.415660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.415902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.415978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.416004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.416100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.416125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.416234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.416259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.416361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.416388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.416469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.416495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.416575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.681 [2024-11-20 06:40:24.416601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.681 qpair failed and we were unable to recover it. 00:29:52.681 [2024-11-20 06:40:24.416686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.682 [2024-11-20 06:40:24.416712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.682 qpair failed and we were unable to recover it. 
00:29:52.682 [2024-11-20 06:40:24.416793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.682 [2024-11-20 06:40:24.416820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.682 qpair failed and we were unable to recover it. 00:29:52.682 [2024-11-20 06:40:24.416907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.682 [2024-11-20 06:40:24.416933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.682 qpair failed and we were unable to recover it. 00:29:52.682 [2024-11-20 06:40:24.417012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.682 [2024-11-20 06:40:24.417038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.956 qpair failed and we were unable to recover it. 00:29:52.956 [2024-11-20 06:40:24.417113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.956 [2024-11-20 06:40:24.417144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.417221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.417332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.417471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.417609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.417725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.417830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 
00:29:52.957 [2024-11-20 06:40:24.417944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.417970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.418900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.418926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 
00:29:52.957 [2024-11-20 06:40:24.419111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.419908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.419989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.420016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.420104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.420134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 
00:29:52.957 [2024-11-20 06:40:24.420268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.420300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.420406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.420434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.421308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.421338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.421421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.957 [2024-11-20 06:40:24.421449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.957 qpair failed and we were unable to recover it. 00:29:52.957 [2024-11-20 06:40:24.421537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.421564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.421643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.421670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.421796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.421823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.421911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.421938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 
00:29:52.958 [2024-11-20 06:40:24.422310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.422923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.422949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 
00:29:52.958 [2024-11-20 06:40:24.423467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.423875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.423901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 
00:29:52.958 [2024-11-20 06:40:24.424810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.424918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.424945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.425033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.425063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.425174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.425203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.958 qpair failed and we were unable to recover it. 00:29:52.958 [2024-11-20 06:40:24.425311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.958 [2024-11-20 06:40:24.425349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.425434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.425461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.425551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.425578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.425660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.425686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.425765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.425791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.425876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.425902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 
00:29:52.959 [2024-11-20 06:40:24.425977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.426951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.426977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 
00:29:52.959 [2024-11-20 06:40:24.427162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.427961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.427987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.428201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.428333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 
00:29:52.959 [2024-11-20 06:40:24.428445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.428555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.428663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.428767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.428883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.959 [2024-11-20 06:40:24.428910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.959 qpair failed and we were unable to recover it. 00:29:52.959 [2024-11-20 06:40:24.429002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.429128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.429254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.429392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.429503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 
00:29:52.960 [2024-11-20 06:40:24.429650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.429761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.429883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.429909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 
00:29:52.960 [2024-11-20 06:40:24.430806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.430930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.430963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.431877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.431902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 
00:29:52.960 [2024-11-20 06:40:24.432002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.432041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.432133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.432161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.432240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.432268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.432362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.432389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.960 [2024-11-20 06:40:24.432477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.960 [2024-11-20 06:40:24.432505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.960 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.432602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.432629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.432712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.432739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.432817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.432843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.432947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.432975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 
00:29:52.961 [2024-11-20 06:40:24.433187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.433900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.433982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 
00:29:52.961 [2024-11-20 06:40:24.434355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.434938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.434965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it.
00:29:52.961 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:29:52.961 [2024-11-20 06:40:24.435047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.435169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.435277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it.
00:29:52.961 [2024-11-20 06:40:24.435401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
[2024-11-20 06:40:24.435427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.435504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it.
00:29:52.961 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:29:52.961 [2024-11-20 06:40:24.435612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.435728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it.
00:29:52.961 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:29:52.961 [2024-11-20 06:40:24.435842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.435875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.435990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.436018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.436098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.436132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.436228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.436256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it.
00:29:52.961 [2024-11-20 06:40:24.436354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.436381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.436461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.961 [2024-11-20 06:40:24.436487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.961 qpair failed and we were unable to recover it. 00:29:52.961 [2024-11-20 06:40:24.436608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.436635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.436711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.436737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.436817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.436842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.436937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.436965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 
00:29:52.962 [2024-11-20 06:40:24.437500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.437957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.437985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 
00:29:52.962 [2024-11-20 06:40:24.438659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.438913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.438994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.439021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.439103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.439130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.439212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.439237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.439346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.439373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.439453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.439479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.439558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.962 [2024-11-20 06:40:24.439583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.962 qpair failed and we were unable to recover it. 00:29:52.962 [2024-11-20 06:40:24.439673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.439699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 
00:29:52.963 [2024-11-20 06:40:24.439777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.439803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.439908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.439933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.440774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 
00:29:52.963 [2024-11-20 06:40:24.440886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.440912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.441840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.441867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 
00:29:52.963 [2024-11-20 06:40:24.442164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.442891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.442918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.443016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.443056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.443162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.443201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.443309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.443339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 
00:29:52.963 [2024-11-20 06:40:24.443436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.963 [2024-11-20 06:40:24.443463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.963 qpair failed and we were unable to recover it. 00:29:52.963 [2024-11-20 06:40:24.443553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.443580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.443706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.443733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.443819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.443846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.443932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.443957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 
00:29:52.964 [2024-11-20 06:40:24.444630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.444967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.444995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 
00:29:52.964 [2024-11-20 06:40:24.445818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.445930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.445958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.446877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.446906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 
00:29:52.964 [2024-11-20 06:40:24.446996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.447023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.447111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.447138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.964 [2024-11-20 06:40:24.447214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.964 [2024-11-20 06:40:24.447240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.964 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.447320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.447346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.447430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.447456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.447541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.447567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.447653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.447678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.447784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.447810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.447892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.447920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 
00:29:52.965 [2024-11-20 06:40:24.448150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.448935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.448961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 
00:29:52.965 [2024-11-20 06:40:24.449457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.449953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.449980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 
00:29:52.965 [2024-11-20 06:40:24.450683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.450946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.450974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.451075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.965 [2024-11-20 06:40:24.451115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.965 qpair failed and we were unable to recover it. 00:29:52.965 [2024-11-20 06:40:24.451256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.451284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.451391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.451417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.451510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.451538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.451654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.451681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.451769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.451796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.451877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.451903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 
00:29:52.966 [2024-11-20 06:40:24.451991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.452884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.452973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 
00:29:52.966 [2024-11-20 06:40:24.453231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.453947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.453975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 
00:29:52.966 [2024-11-20 06:40:24.454425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.454924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.454950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.966 [2024-11-20 06:40:24.455084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.966 [2024-11-20 06:40:24.455110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.966 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.455198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.455314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.455431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.455547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 
00:29:52.967 [2024-11-20 06:40:24.455670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.455780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.455920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.455947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.456844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 
00:29:52.967 [2024-11-20 06:40:24.456946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.456972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.457892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.457976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.458003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.458086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.458118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 
00:29:52.967 [2024-11-20 06:40:24.458227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.458254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.458346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.458374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.967 qpair failed and we were unable to recover it. 00:29:52.967 [2024-11-20 06:40:24.458461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.967 [2024-11-20 06:40:24.458489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.458595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.458634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.458755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.458782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.458860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.458886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.458969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.458996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 
00:29:52.968 [2024-11-20 06:40:24.459481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.459899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.459983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.460174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.460328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.460445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.460557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.460704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 
00:29:52.968 [2024-11-20 06:40:24.460822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.460936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.460964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.461929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.461955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 
00:29:52.968 [2024-11-20 06:40:24.462039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.462067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.462149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.462176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.462255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.968 [2024-11-20 06:40:24.462284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.968 qpair failed and we were unable to recover it. 00:29:52.968 [2024-11-20 06:40:24.462391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.462418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.462498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.462525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.462635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.462661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.462744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.462771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.462849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.462874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.462949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.462974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 
00:29:52.969 [2024-11-20 06:40:24.463170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.463903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.463931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 
00:29:52.969 [2024-11-20 06:40:24.464384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.464951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.464980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 
00:29:52.969 [2024-11-20 06:40:24.465583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.465958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.465985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.466071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.466098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.969 [2024-11-20 06:40:24.466180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.969 [2024-11-20 06:40:24.466207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.969 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.466321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.466353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.466464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.466490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.466573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.466600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.466687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.466713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 
00:29:52.970 [2024-11-20 06:40:24.466796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.466823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.466900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.466927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.467914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.467940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 
00:29:52.970 [2024-11-20 06:40:24.468029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.468964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.468991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 
00:29:52.970 [2024-11-20 06:40:24.469343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.469869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.469979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.470005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.470099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.470125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.470228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.970 [2024-11-20 06:40:24.470254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.970 qpair failed and we were unable to recover it. 00:29:52.970 [2024-11-20 06:40:24.470343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.470369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.470454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.470480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 
00:29:52.971 [2024-11-20 06:40:24.470565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.470591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.470712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.470738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.470821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.470847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.470939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.470965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 
00:29:52.971 [2024-11-20 06:40:24.471847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.471955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.471981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.472939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.472966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 
00:29:52.971 [2024-11-20 06:40:24.473060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.473917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.473998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.474024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.474109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.474136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 
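The same three-record pattern keeps repeating for each qpair in this run (tqpair=0x7fe760000b90, 0x7fe75c000b90, 0x7fe768000b90 and 0x205ffa0): every connection attempt gets ECONNREFUSED and ends with "qpair failed and we were unable to recover it." As a generic illustration of the bounded retry-with-delay a caller might wrap around such a connect path, a sketch under the assumption that this is not SPDK's actual reconnect logic and that try_connect() is a local helper defined here:

/*
 * Generic retry sketch (illustrative only, not SPDK's recovery path).
 * ECONNREFUSED and ETIMEDOUT are treated as "target not reachable yet" and
 * retried a bounded number of times; any other errno aborts immediately.
 */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* One TCP connect attempt: 0 on success, -1 with errno preserved. */
static int try_connect(const char *ip, unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -1;

    struct sockaddr_in addr = { 0 };
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);

    int rc = connect(fd, (struct sockaddr *)&addr, sizeof(addr));
    int saved = errno;
    close(fd);
    errno = saved;              /* keep connect()'s errno across close() */
    return rc;
}

static bool connect_with_retry(const char *ip, unsigned short port,
                               int max_attempts, unsigned int delay_sec)
{
    for (int attempt = 1; attempt <= max_attempts; attempt++) {
        if (try_connect(ip, port) == 0)
            return true;

        if (errno != ECONNREFUSED && errno != ETIMEDOUT) {
            fprintf(stderr, "attempt %d: giving up, %s\n",
                    attempt, strerror(errno));
            return false;
        }
        fprintf(stderr, "attempt %d: %s, retrying in %us\n",
                attempt, strerror(errno), delay_sec);
        sleep(delay_sec);
    }
    /* Out of attempts: the analogue of the log's
     * "qpair failed and we were unable to recover it." */
    return false;
}

int main(void)
{
    return connect_with_retry("10.0.0.2", 4420, 5, 1) ? 0 : 1;
}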
00:29:52.971 [2024-11-20 06:40:24.474217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.474244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.474358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.971 [2024-11-20 06:40:24.474388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.971 qpair failed and we were unable to recover it. 00:29:52.971 [2024-11-20 06:40:24.474479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.474508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.474589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.474616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.474698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.474724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.474838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.474864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.474945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.474971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 
00:29:52.972 [2024-11-20 06:40:24.475423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.475892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.475919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 
00:29:52.972 [2024-11-20 06:40:24.476612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.476953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.476982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 
00:29:52.972 [2024-11-20 06:40:24.477844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.477949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.477976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.478050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.478076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.972 [2024-11-20 06:40:24.478161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.972 [2024-11-20 06:40:24.478187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.972 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.478270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.478400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.478509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.478631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.478742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.478862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 
00:29:52.973 [2024-11-20 06:40:24.478968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.478995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.479907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.479935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 
00:29:52.973 [2024-11-20 06:40:24.480175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.480902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.973 [2024-11-20 06:40:24.480990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.973 [2024-11-20 06:40:24.481017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.973 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 
00:29:52.974 [2024-11-20 06:40:24.481332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.481956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.481984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 
00:29:52.974 [2024-11-20 06:40:24.482570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.482941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.482968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.483095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.483248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.483396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.483504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.483610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.483769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 
00:29:52.974 [2024-11-20 06:40:24.483882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.483910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.484001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.484130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.484259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.484405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 Malloc0 00:29:52.974 [2024-11-20 06:40:24.484523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.484633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 [2024-11-20 06:40:24.484782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.974 [2024-11-20 06:40:24.484811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.974 qpair failed and we were unable to recover it. 00:29:52.974 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.974 [2024-11-20 06:40:24.484892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.484917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 
00:29:52.975 [2024-11-20 06:40:24.485014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:29:52.975 [2024-11-20 06:40:24.485053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.975 [2024-11-20 06:40:24.485171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.975 [2024-11-20 06:40:24.485280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.485405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.485524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.485641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.485769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.485884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.485977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 
00:29:52.975 [2024-11-20 06:40:24.486094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.486875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.486902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 
00:29:52.975 [2024-11-20 06:40:24.487291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.487899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.487928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.488024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.975 [2024-11-20 06:40:24.488064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.975 qpair failed and we were unable to recover it. 00:29:52.975 [2024-11-20 06:40:24.488162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:52.976 [2024-11-20 06:40:24.488190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.488273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
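(The rpc_cmd nvmf_create_transport -t tcp -o trace a few lines up, answered here by the "*** TCP Transport Init ***" notice from tcp.c, is the point where the target's TCP transport comes up. Assuming the usual autotest helper, rpc_cmd simply forwards its arguments to SPDK's RPC client, roughly:
  ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o   # socket path is the default and may differ in this run; -o is carried over verbatim from the trace)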
00:29:52.976 [2024-11-20 06:40:24.488389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.488506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.488631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.488743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.488883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.488908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
00:29:52.976 [2024-11-20 06:40:24.489567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.489920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.489999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
00:29:52.976 [2024-11-20 06:40:24.490712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.490954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.490982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
00:29:52.976 [2024-11-20 06:40:24.491869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.491898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.491984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.492895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.492922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
00:29:52.976 [2024-11-20 06:40:24.493036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.493916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.493943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
00:29:52.976 [2024-11-20 06:40:24.494268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.494895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.494984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 
00:29:52.976 [2024-11-20 06:40:24.495472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.495923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.495950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.496065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.496094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.496199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.976 [2024-11-20 06:40:24.496238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.976 qpair failed and we were unable to recover it. 00:29:52.976 [2024-11-20 06:40:24.496336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.496366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.977 [2024-11-20 06:40:24.496448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.496475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
00:29:52.977 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:52.977 [2024-11-20 06:40:24.496561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.496587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.977 [2024-11-20 06:40:24.496663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.496695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.977 [2024-11-20 06:40:24.496791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.496817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.496900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.496926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
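(The rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 trace above creates the test subsystem with any host allowed (-a) and the serial number SPDK00000000000001 (-s); together with the Malloc0 name echoed earlier, this is the usual target bring-up in target_disconnect.sh. A sketch of the typical full sequence, assuming the standard flow; the malloc bdev size and block size below are placeholders, not values taken from this log:
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                                       # create the backing malloc bdev
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                   # expose it as a namespace
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420)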
00:29:52.977 [2024-11-20 06:40:24.497524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.497878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.497906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
00:29:52.977 [2024-11-20 06:40:24.498663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.498910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.498939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
00:29:52.977 [2024-11-20 06:40:24.499838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.499948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.499976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.500911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.500937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
00:29:52.977 [2024-11-20 06:40:24.501021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.501945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.501974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
00:29:52.977 [2024-11-20 06:40:24.502164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.502900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.502986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.503014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.503101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.503129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 00:29:52.977 [2024-11-20 06:40:24.503211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.977 [2024-11-20 06:40:24.503238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.977 qpair failed and we were unable to recover it. 
00:29:52.978 [2024-11-20 06:40:24.503321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.503348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.503428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.503454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.503530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.503556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.503634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.503660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.503770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.503796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.503902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.503928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.504011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.504118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.504231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.504351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 
00:29:52.978 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.978 [2024-11-20 06:40:24.504471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.504579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:52.978 [2024-11-20 06:40:24.504605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.504696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.978 [2024-11-20 06:40:24.504808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.978 [2024-11-20 06:40:24.504921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.504947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 
00:29:52.978 [2024-11-20 06:40:24.505407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.505881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.505908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 
00:29:52.978 [2024-11-20 06:40:24.506560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.506914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.506941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 
00:29:52.978 [2024-11-20 06:40:24.507768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.507911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.507950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.508038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.508066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.508150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.508178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.508258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.508284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.508378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.978 [2024-11-20 06:40:24.508405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.978 qpair failed and we were unable to recover it. 00:29:52.978 [2024-11-20 06:40:24.508487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.508514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.508604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.508631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.508712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.508738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.508851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.508878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 
00:29:52.979 [2024-11-20 06:40:24.508968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.508995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.509912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.509937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 
00:29:52.979 [2024-11-20 06:40:24.510130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.510932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.510960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 
00:29:52.979 [2024-11-20 06:40:24.511296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.511972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.511998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.512085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.512110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.512197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.512223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 
00:29:52.979 [2024-11-20 06:40:24.512419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.512446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.979 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 [2024-11-20 06:40:24.512539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.979 [2024-11-20 06:40:24.512566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.979 qpair failed and we were unable to recover it. 00:29:52.979 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:52.979 [2024-11-20 06:40:24.512655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.512680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.980 [2024-11-20 06:40:24.512767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.512793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.980 [2024-11-20 06:40:24.512871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.512897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.512976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 
00:29:52.980 [2024-11-20 06:40:24.513316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.513876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.513901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 
00:29:52.980 [2024-11-20 06:40:24.514449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.514910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.514936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 
00:29:52.980 [2024-11-20 06:40:24.515707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.515943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.515969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x205ffa0 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.516063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.516102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe75c000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.516193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.516224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe760000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.516316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:52.980 [2024-11-20 06:40:24.516346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fe768000b90 with addr=10.0.0.2, port=4420 00:29:52.980 qpair failed and we were unable to recover it. 00:29:52.980 [2024-11-20 06:40:24.516447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:52.980 [2024-11-20 06:40:24.519005] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.980 [2024-11-20 06:40:24.519110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.980 [2024-11-20 06:40:24.519138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.980 [2024-11-20 06:40:24.519152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.980 [2024-11-20 06:40:24.519171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.980 [2024-11-20 06:40:24.519211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.980 qpair failed and we were unable to recover it. 
00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:52.980 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:29:52.981 06:40:24 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2206345 00:29:52.981 [2024-11-20 06:40:24.528808] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.528899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.528926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.528940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.528953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.528983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.538850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.538939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.538965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.538979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.538992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.539021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 
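(Editorial note on the traced rpc_cmd calls interleaved with the errors above: the test provisions the target over SPDK's JSON-RPC — nvmf_subsystem_add_ns attaches the Malloc0 bdev as a namespace of nqn.2016-06.io.spdk:cnode1, and nvmf_subsystem_add_listener adds the TCP listener on 10.0.0.2:4420 for that subsystem and for discovery. Once the "NVMe/TCP Target Listening" notice fires, the failures change character: the TCP connect succeeds but the fabric CONNECT command is rejected (sct 1, sc 130, "Unknown controller ID 0x1"), which is what the per-iteration records below show. A standalone sketch of the same provisioning, assuming rpc_cmd forwards these subcommands to scripts/rpc.py in an SPDK checkout with a target app already running and the subsystem created — paths and setup are assumptions, the arguments mirror the trace:

# Attach Malloc0 as a namespace and add TCP listeners, as traced above.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
)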
00:29:52.981 [2024-11-20 06:40:24.548852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.548964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.548989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.549003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.549015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.549045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.558819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.558912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.558938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.558951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.558964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.558996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.568811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.568895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.568921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.568935] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.568947] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.568977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 
00:29:52.981 [2024-11-20 06:40:24.578868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.578956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.578981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.578995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.579008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.579038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.588887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.588979] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.589005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.589019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.589031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.589061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.598907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.598997] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.599028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.599043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.599055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.599087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 
00:29:52.981 [2024-11-20 06:40:24.609017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.609130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.609156] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.609170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.609183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.609212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.618961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.619047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.619073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.619086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.619099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.619129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 00:29:52.981 [2024-11-20 06:40:24.628969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.981 [2024-11-20 06:40:24.629059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.981 [2024-11-20 06:40:24.629084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.981 [2024-11-20 06:40:24.629098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.981 [2024-11-20 06:40:24.629111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.981 [2024-11-20 06:40:24.629140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.981 qpair failed and we were unable to recover it. 
00:29:52.981 [2024-11-20 06:40:24.639067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.639154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.639179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.639193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.639211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.639241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.649009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.649092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.649117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.649131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.649143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.649173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.659179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.659262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.659288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.659309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.659323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.659355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 
00:29:52.982 [2024-11-20 06:40:24.669128] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.669218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.669244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.669257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.669270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.669299] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.679118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.679212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.679237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.679252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.679264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.679294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.689138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.689230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.689259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.689275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.689288] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.689328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 
00:29:52.982 [2024-11-20 06:40:24.699172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.699310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.699336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.699350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.699362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.699393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.709238] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.709336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.709366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.709381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.709394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.709424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.719250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.719344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.719370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.719384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.719396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.719428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 
00:29:52.982 [2024-11-20 06:40:24.729251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.729348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.729383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.729398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.729410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.729441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.739273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.739366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.739392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.739406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.739418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.739449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.749323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.749416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.749441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.749455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.749468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.749498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 
00:29:52.982 [2024-11-20 06:40:24.759349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.982 [2024-11-20 06:40:24.759436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.982 [2024-11-20 06:40:24.759461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.982 [2024-11-20 06:40:24.759475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.982 [2024-11-20 06:40:24.759487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.982 [2024-11-20 06:40:24.759519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.982 qpair failed and we were unable to recover it. 00:29:52.982 [2024-11-20 06:40:24.769363] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:52.983 [2024-11-20 06:40:24.769448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:52.983 [2024-11-20 06:40:24.769474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:52.983 [2024-11-20 06:40:24.769494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:52.983 [2024-11-20 06:40:24.769508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:52.983 [2024-11-20 06:40:24.769539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:52.983 qpair failed and we were unable to recover it. 00:29:53.242 [2024-11-20 06:40:24.779393] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.779478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.779503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.779517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.779530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.779559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 
00:29:53.242 [2024-11-20 06:40:24.789475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.789566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.789592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.789606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.789618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.789648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 00:29:53.242 [2024-11-20 06:40:24.799471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.799561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.799587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.799601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.799613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.799645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 00:29:53.242 [2024-11-20 06:40:24.809495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.809623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.809648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.809662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.809675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.809711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 
00:29:53.242 [2024-11-20 06:40:24.819538] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.819624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.819648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.819662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.819674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.819703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 00:29:53.242 [2024-11-20 06:40:24.829573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.829667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.829694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.829708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.829721] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.829751] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 00:29:53.242 [2024-11-20 06:40:24.839574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.839662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.839687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.839700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.242 [2024-11-20 06:40:24.839713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.242 [2024-11-20 06:40:24.839744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.242 qpair failed and we were unable to recover it. 
00:29:53.242 [2024-11-20 06:40:24.849635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.242 [2024-11-20 06:40:24.849726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.242 [2024-11-20 06:40:24.849752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.242 [2024-11-20 06:40:24.849766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.849778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.849808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.859628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.859728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.859753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.859767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.859780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.859810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.869671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.869769] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.869795] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.869808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.869821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.869850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 
00:29:53.243 [2024-11-20 06:40:24.879707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.879790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.879822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.879837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.879850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.879879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.889726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.889811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.889836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.889851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.889864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.889893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.899739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.899831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.899856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.899876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.899890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.899920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 
00:29:53.243 [2024-11-20 06:40:24.909796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.909892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.909917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.909931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.909944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.909974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.919842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.919921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.919946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.919960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.919972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.920003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.929837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.929926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.929952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.929965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.929978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.930007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 
00:29:53.243 [2024-11-20 06:40:24.939878] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.939962] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.939988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.940002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.940014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.940053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.949891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.949984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.950009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.243 [2024-11-20 06:40:24.950023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.243 [2024-11-20 06:40:24.950035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.243 [2024-11-20 06:40:24.950065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.243 qpair failed and we were unable to recover it. 00:29:53.243 [2024-11-20 06:40:24.959948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.243 [2024-11-20 06:40:24.960037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.243 [2024-11-20 06:40:24.960063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:24.960077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:24.960089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:24.960121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 
00:29:53.244 [2024-11-20 06:40:24.969985] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:24.970071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:24.970099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:24.970114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:24.970127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:24.970157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 00:29:53.244 [2024-11-20 06:40:24.979976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:24.980068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:24.980098] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:24.980116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:24.980128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:24.980161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 00:29:53.244 [2024-11-20 06:40:24.990050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:24.990165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:24.990192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:24.990205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:24.990219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:24.990248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 
00:29:53.244 [2024-11-20 06:40:25.000035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.000123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.000149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:25.000164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:25.000177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:25.000207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 00:29:53.244 [2024-11-20 06:40:25.010062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.010145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.010171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:25.010185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:25.010197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:25.010228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 00:29:53.244 [2024-11-20 06:40:25.020110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.020211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.020237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:25.020251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:25.020263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:25.020294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 
00:29:53.244 [2024-11-20 06:40:25.030123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.030225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.030256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:25.030270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:25.030283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:25.030321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 00:29:53.244 [2024-11-20 06:40:25.040183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.040266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.040292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:25.040313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:25.040327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:25.040358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 00:29:53.244 [2024-11-20 06:40:25.050195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.050323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.050349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.244 [2024-11-20 06:40:25.050363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.244 [2024-11-20 06:40:25.050376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.244 [2024-11-20 06:40:25.050406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.244 qpair failed and we were unable to recover it. 
00:29:53.244 [2024-11-20 06:40:25.060215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.244 [2024-11-20 06:40:25.060344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.244 [2024-11-20 06:40:25.060370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.245 [2024-11-20 06:40:25.060384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.245 [2024-11-20 06:40:25.060397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.245 [2024-11-20 06:40:25.060427] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.245 qpair failed and we were unable to recover it. 00:29:53.245 [2024-11-20 06:40:25.070252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.245 [2024-11-20 06:40:25.070349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.245 [2024-11-20 06:40:25.070375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.245 [2024-11-20 06:40:25.070389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.245 [2024-11-20 06:40:25.070408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.245 [2024-11-20 06:40:25.070439] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.245 qpair failed and we were unable to recover it. 00:29:53.504 [2024-11-20 06:40:25.080265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.080359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.080385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.080400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.080413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.080442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 
00:29:53.504 [2024-11-20 06:40:25.090299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.090396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.090421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.090435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.090447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.090477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 00:29:53.504 [2024-11-20 06:40:25.100358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.100467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.100492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.100506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.100518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.100550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 00:29:53.504 [2024-11-20 06:40:25.110368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.110456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.110481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.110495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.110508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.110540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 
00:29:53.504 [2024-11-20 06:40:25.120390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.120481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.120506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.120520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.120532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.120563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 00:29:53.504 [2024-11-20 06:40:25.130450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.130549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.130575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.130589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.130601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.130631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 00:29:53.504 [2024-11-20 06:40:25.140460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.504 [2024-11-20 06:40:25.140552] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.504 [2024-11-20 06:40:25.140576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.504 [2024-11-20 06:40:25.140589] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.504 [2024-11-20 06:40:25.140601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.504 [2024-11-20 06:40:25.140630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.504 qpair failed and we were unable to recover it. 
00:29:53.504 [2024-11-20 06:40:25.150499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.150589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.150615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.150629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.150642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.150672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.160536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.160620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.160651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.160665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.160678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.160708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.170528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.170618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.170644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.170664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.170677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.170709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 
00:29:53.505 [2024-11-20 06:40:25.180582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.180670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.180696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.180710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.180723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.180753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.190620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.190718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.190747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.190764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.190776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.190807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.200609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.200699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.200726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.200741] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.200759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.200791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 
00:29:53.505 [2024-11-20 06:40:25.210673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.210757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.210783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.210797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.210810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.210842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.220709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.220792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.220817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.220831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.220844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.220875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.230710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.230798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.230824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.230838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.230850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.230879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 
00:29:53.505 [2024-11-20 06:40:25.240721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.240806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.240831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.240846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.240858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.240887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.250786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.250867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.250896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.250911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.250923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.250953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.260825] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.260911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.260937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.260951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.260964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.260993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 
00:29:53.505 [2024-11-20 06:40:25.270832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.505 [2024-11-20 06:40:25.270920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.505 [2024-11-20 06:40:25.270945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.505 [2024-11-20 06:40:25.270959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.505 [2024-11-20 06:40:25.270971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.505 [2024-11-20 06:40:25.271004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.505 qpair failed and we were unable to recover it. 00:29:53.505 [2024-11-20 06:40:25.280837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.506 [2024-11-20 06:40:25.280920] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.506 [2024-11-20 06:40:25.280945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.506 [2024-11-20 06:40:25.280959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.506 [2024-11-20 06:40:25.280971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.506 [2024-11-20 06:40:25.281001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.506 qpair failed and we were unable to recover it. 00:29:53.506 [2024-11-20 06:40:25.291764] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.506 [2024-11-20 06:40:25.291868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.506 [2024-11-20 06:40:25.291899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.506 [2024-11-20 06:40:25.291914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.506 [2024-11-20 06:40:25.291926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.506 [2024-11-20 06:40:25.291957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.506 qpair failed and we were unable to recover it. 
00:29:53.506 [2024-11-20 06:40:25.300974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.506 [2024-11-20 06:40:25.301058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.506 [2024-11-20 06:40:25.301083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.506 [2024-11-20 06:40:25.301097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.506 [2024-11-20 06:40:25.301110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.506 [2024-11-20 06:40:25.301139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.506 qpair failed and we were unable to recover it. 00:29:53.506 [2024-11-20 06:40:25.310993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.506 [2024-11-20 06:40:25.311107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.506 [2024-11-20 06:40:25.311132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.506 [2024-11-20 06:40:25.311147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.506 [2024-11-20 06:40:25.311160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.506 [2024-11-20 06:40:25.311189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.506 qpair failed and we were unable to recover it. 00:29:53.506 [2024-11-20 06:40:25.321047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.506 [2024-11-20 06:40:25.321145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.506 [2024-11-20 06:40:25.321170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.506 [2024-11-20 06:40:25.321184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.506 [2024-11-20 06:40:25.321197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.506 [2024-11-20 06:40:25.321226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.506 qpair failed and we were unable to recover it. 
00:29:53.506 [2024-11-20 06:40:25.331004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.506 [2024-11-20 06:40:25.331091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.506 [2024-11-20 06:40:25.331117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.506 [2024-11-20 06:40:25.331137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.506 [2024-11-20 06:40:25.331150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.506 [2024-11-20 06:40:25.331181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.506 qpair failed and we were unable to recover it. 00:29:53.765 [2024-11-20 06:40:25.341055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.765 [2024-11-20 06:40:25.341167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.765 [2024-11-20 06:40:25.341192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.765 [2024-11-20 06:40:25.341206] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.765 [2024-11-20 06:40:25.341218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.765 [2024-11-20 06:40:25.341250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.765 qpair failed and we were unable to recover it. 00:29:53.765 [2024-11-20 06:40:25.351073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.351163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.351189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.351203] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.351216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.351245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 
00:29:53.766 [2024-11-20 06:40:25.361079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.361171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.361197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.361211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.361223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.361253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 00:29:53.766 [2024-11-20 06:40:25.371198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.371286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.371321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.371336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.371349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.371386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 00:29:53.766 [2024-11-20 06:40:25.381193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.381318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.381374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.381390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.381403] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.381446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 
00:29:53.766 [2024-11-20 06:40:25.391168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.391314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.391341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.391355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.391367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.391398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 00:29:53.766 [2024-11-20 06:40:25.401221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.401315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.401341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.401355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.401369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.401400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 00:29:53.766 [2024-11-20 06:40:25.411203] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.411296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.411329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.411343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.411356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.411388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 
00:29:53.766 [2024-11-20 06:40:25.421233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.421330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.421356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.421370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.421383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.421414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 00:29:53.766 [2024-11-20 06:40:25.431273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.431373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.431400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.431413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.431426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.431456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 00:29:53.766 [2024-11-20 06:40:25.441282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.766 [2024-11-20 06:40:25.441399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.766 [2024-11-20 06:40:25.441425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.766 [2024-11-20 06:40:25.441438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.766 [2024-11-20 06:40:25.441451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.766 [2024-11-20 06:40:25.441483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.766 qpair failed and we were unable to recover it. 
00:29:53.766 [2024-11-20 06:40:25.451373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.451494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.451520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.451535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.451548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.451577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.461399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.461509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.461535] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.461557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.461571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.461601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.471439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.471533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.471559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.471572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.471586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.471626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 
00:29:53.767 [2024-11-20 06:40:25.481420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.481556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.481582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.481596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.481610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.481639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.491447] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.491574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.491600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.491614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.491626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.491655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.501519] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.501642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.501668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.501681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.501694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.501730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 
00:29:53.767 [2024-11-20 06:40:25.511511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.511602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.511628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.511642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.511655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.511685] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.521531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.521614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.521640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.521653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.521666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.521697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.531597] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.531684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.531710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.531724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.531737] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.531766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 
00:29:53.767 [2024-11-20 06:40:25.541590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.541677] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.541703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.541717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.541730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.541760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.551635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.551724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.551749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.767 [2024-11-20 06:40:25.551763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.767 [2024-11-20 06:40:25.551776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.767 [2024-11-20 06:40:25.551808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.767 qpair failed and we were unable to recover it. 00:29:53.767 [2024-11-20 06:40:25.561672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.767 [2024-11-20 06:40:25.561752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.767 [2024-11-20 06:40:25.561778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.768 [2024-11-20 06:40:25.561792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.768 [2024-11-20 06:40:25.561805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.768 [2024-11-20 06:40:25.561835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.768 qpair failed and we were unable to recover it. 
00:29:53.768 [2024-11-20 06:40:25.571646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.768 [2024-11-20 06:40:25.571725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.768 [2024-11-20 06:40:25.571751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.768 [2024-11-20 06:40:25.571764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.768 [2024-11-20 06:40:25.571777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.768 [2024-11-20 06:40:25.571809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.768 qpair failed and we were unable to recover it. 00:29:53.768 [2024-11-20 06:40:25.581699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.768 [2024-11-20 06:40:25.581784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.768 [2024-11-20 06:40:25.581810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.768 [2024-11-20 06:40:25.581824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.768 [2024-11-20 06:40:25.581836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.768 [2024-11-20 06:40:25.581884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.768 qpair failed and we were unable to recover it. 00:29:53.768 [2024-11-20 06:40:25.591758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:53.768 [2024-11-20 06:40:25.591849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:53.768 [2024-11-20 06:40:25.591883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:53.768 [2024-11-20 06:40:25.591898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:53.768 [2024-11-20 06:40:25.591911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:53.768 [2024-11-20 06:40:25.591941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:53.768 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-11-20 06:40:25.601773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.601859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.601886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.601899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.601912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.601941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-11-20 06:40:25.611763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.611844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.611869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.611883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.611896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.611925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-11-20 06:40:25.621811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.621940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.621966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.621980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.621992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.622021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-11-20 06:40:25.631876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.631969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.631997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.632012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.632033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.632065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-11-20 06:40:25.641886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.642000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.642026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.642039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.642052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.642084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-11-20 06:40:25.651902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.651988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.652013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.652027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.652040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.652071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 
00:29:54.027 [2024-11-20 06:40:25.661951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.662041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.662067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.662081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.662094] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.662123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-11-20 06:40:25.671975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.672098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.672124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.672138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.672151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.027 [2024-11-20 06:40:25.672182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.027 qpair failed and we were unable to recover it. 00:29:54.027 [2024-11-20 06:40:25.682004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.027 [2024-11-20 06:40:25.682091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.027 [2024-11-20 06:40:25.682117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.027 [2024-11-20 06:40:25.682131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.027 [2024-11-20 06:40:25.682144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.682174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 
00:29:54.028 [2024-11-20 06:40:25.692056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.692142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.692168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.692182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.692194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.692224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.702067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.702181] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.702207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.702221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.702234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.702264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.712067] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.712169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.712193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.712207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.712220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.712251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 
00:29:54.028 [2024-11-20 06:40:25.722080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.722175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.722205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.722220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.722233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.722262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.732113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.732198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.732223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.732237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.732249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.732279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.742150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.742238] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.742264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.742277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.742290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.742329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 
00:29:54.028 [2024-11-20 06:40:25.752186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.752272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.752297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.752319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.752333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.752363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.762214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.762340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.762367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.762381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.762399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.762430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.772239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.772335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.772365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.772381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.772394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.772425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 
00:29:54.028 [2024-11-20 06:40:25.782267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.782397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.782423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.782438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.782451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.782482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.792311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.792403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.792429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.792443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.792457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.792489] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 00:29:54.028 [2024-11-20 06:40:25.802364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.802447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.802473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.028 [2024-11-20 06:40:25.802486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.028 [2024-11-20 06:40:25.802499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.028 [2024-11-20 06:40:25.802531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.028 qpair failed and we were unable to recover it. 
00:29:54.028 [2024-11-20 06:40:25.812377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.028 [2024-11-20 06:40:25.812467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.028 [2024-11-20 06:40:25.812492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.029 [2024-11-20 06:40:25.812506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.029 [2024-11-20 06:40:25.812519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.029 [2024-11-20 06:40:25.812549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.029 qpair failed and we were unable to recover it. 00:29:54.029 [2024-11-20 06:40:25.822381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.029 [2024-11-20 06:40:25.822502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.029 [2024-11-20 06:40:25.822528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.029 [2024-11-20 06:40:25.822542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.029 [2024-11-20 06:40:25.822554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.029 [2024-11-20 06:40:25.822584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.029 qpair failed and we were unable to recover it. 00:29:54.029 [2024-11-20 06:40:25.832445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.029 [2024-11-20 06:40:25.832533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.029 [2024-11-20 06:40:25.832558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.029 [2024-11-20 06:40:25.832572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.029 [2024-11-20 06:40:25.832584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.029 [2024-11-20 06:40:25.832614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.029 qpair failed and we were unable to recover it. 
00:29:54.029 [2024-11-20 06:40:25.842444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.029 [2024-11-20 06:40:25.842536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.029 [2024-11-20 06:40:25.842561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.029 [2024-11-20 06:40:25.842575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.029 [2024-11-20 06:40:25.842588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.029 [2024-11-20 06:40:25.842617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.029 qpair failed and we were unable to recover it. 00:29:54.029 [2024-11-20 06:40:25.852464] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.029 [2024-11-20 06:40:25.852550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.029 [2024-11-20 06:40:25.852580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.029 [2024-11-20 06:40:25.852595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.029 [2024-11-20 06:40:25.852608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.029 [2024-11-20 06:40:25.852638] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.029 qpair failed and we were unable to recover it. 00:29:54.287 [2024-11-20 06:40:25.862492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.287 [2024-11-20 06:40:25.862582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.287 [2024-11-20 06:40:25.862607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.287 [2024-11-20 06:40:25.862621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.287 [2024-11-20 06:40:25.862634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.287 [2024-11-20 06:40:25.862664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-11-20 06:40:25.872552] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.287 [2024-11-20 06:40:25.872642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.287 [2024-11-20 06:40:25.872667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.287 [2024-11-20 06:40:25.872681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.287 [2024-11-20 06:40:25.872693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.287 [2024-11-20 06:40:25.872724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-11-20 06:40:25.882573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.287 [2024-11-20 06:40:25.882657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.287 [2024-11-20 06:40:25.882682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.287 [2024-11-20 06:40:25.882696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.287 [2024-11-20 06:40:25.882708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.287 [2024-11-20 06:40:25.882739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.287 qpair failed and we were unable to recover it. 00:29:54.287 [2024-11-20 06:40:25.892606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.287 [2024-11-20 06:40:25.892691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.287 [2024-11-20 06:40:25.892717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.287 [2024-11-20 06:40:25.892738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.287 [2024-11-20 06:40:25.892752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.287 [2024-11-20 06:40:25.892784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.287 qpair failed and we were unable to recover it. 
00:29:54.287 [2024-11-20 06:40:25.902607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.287 [2024-11-20 06:40:25.902710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.287 [2024-11-20 06:40:25.902736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.902750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.902763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.902793] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:25.912703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.912823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.912849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.912863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.912876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.912905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:25.922673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.922773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.922802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.922817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.922830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.922861] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-11-20 06:40:25.932689] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.932773] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.932799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.932813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.932825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.932862] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:25.942727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.942807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.942832] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.942846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.942859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.942901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:25.952862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.952952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.952978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.952992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.953004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.953035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-11-20 06:40:25.962829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.962911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.962937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.962951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.962964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.962993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:25.972864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.972977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.973002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.973016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.973029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.973059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:25.982866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.982953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.982978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.982993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.983005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.983034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-11-20 06:40:25.992905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:25.992998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:25.993025] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:25.993039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:25.993053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:25.993085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:26.002966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:26.003083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:26.003109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:26.003123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:26.003136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:26.003166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:26.012991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:26.013106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:26.013133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:26.013147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:26.013161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:26.013192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 
00:29:54.288 [2024-11-20 06:40:26.022977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:26.023073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:26.023099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.288 [2024-11-20 06:40:26.023119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.288 [2024-11-20 06:40:26.023133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.288 [2024-11-20 06:40:26.023163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.288 qpair failed and we were unable to recover it. 00:29:54.288 [2024-11-20 06:40:26.033065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.288 [2024-11-20 06:40:26.033159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.288 [2024-11-20 06:40:26.033185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.033199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.033212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.033242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 00:29:54.289 [2024-11-20 06:40:26.043050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.043170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.043195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.043209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.043222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.043254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 
00:29:54.289 [2024-11-20 06:40:26.053072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.053192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.053218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.053232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.053245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.053288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 00:29:54.289 [2024-11-20 06:40:26.063074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.063162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.063187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.063201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.063214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.063251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 00:29:54.289 [2024-11-20 06:40:26.073157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.073246] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.073272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.073286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.073300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.073339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 
00:29:54.289 [2024-11-20 06:40:26.083158] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.083244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.083270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.083284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.083297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.083336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 00:29:54.289 [2024-11-20 06:40:26.093231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.093365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.093391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.093405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.093417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.093447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 00:29:54.289 [2024-11-20 06:40:26.103202] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.103289] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.103328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.103344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.103356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.103386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 
00:29:54.289 [2024-11-20 06:40:26.113250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.289 [2024-11-20 06:40:26.113358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.289 [2024-11-20 06:40:26.113384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.289 [2024-11-20 06:40:26.113398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.289 [2024-11-20 06:40:26.113412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.289 [2024-11-20 06:40:26.113443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.289 qpair failed and we were unable to recover it. 00:29:54.548 [2024-11-20 06:40:26.123279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.548 [2024-11-20 06:40:26.123374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.548 [2024-11-20 06:40:26.123400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.548 [2024-11-20 06:40:26.123414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.548 [2024-11-20 06:40:26.123426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.548 [2024-11-20 06:40:26.123458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.548 qpair failed and we were unable to recover it. 00:29:54.548 [2024-11-20 06:40:26.133283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.548 [2024-11-20 06:40:26.133382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.548 [2024-11-20 06:40:26.133408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.548 [2024-11-20 06:40:26.133422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.548 [2024-11-20 06:40:26.133435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.548 [2024-11-20 06:40:26.133464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.548 qpair failed and we were unable to recover it. 
00:29:54.548 [2024-11-20 06:40:26.143316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.143442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.143470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.143484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.143495] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.143525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-11-20 06:40:26.153438] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.153569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.153600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.153615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.153628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.153660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-11-20 06:40:26.163369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.163458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.163484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.163498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.163511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.163541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 
00:29:54.549 [2024-11-20 06:40:26.173403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.173522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.173551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.173565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.173577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.173607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-11-20 06:40:26.183439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.183531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.183556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.183570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.183583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.183614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-11-20 06:40:26.193481] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.193570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.193596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.193610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.193629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.193660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 
00:29:54.549 [2024-11-20 06:40:26.203502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.203590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.203616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.203630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.203642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.203674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-11-20 06:40:26.213511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.213606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.213632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.213647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.213659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.549 [2024-11-20 06:40:26.213690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.549 qpair failed and we were unable to recover it. 00:29:54.549 [2024-11-20 06:40:26.223545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.549 [2024-11-20 06:40:26.223630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.549 [2024-11-20 06:40:26.223656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.549 [2024-11-20 06:40:26.223670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.549 [2024-11-20 06:40:26.223682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.223714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 
00:29:54.550 [2024-11-20 06:40:26.233599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.233690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.233716] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.233730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.233743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.233773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 00:29:54.550 [2024-11-20 06:40:26.243645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.243762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.243788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.243802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.243814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.243857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 00:29:54.550 [2024-11-20 06:40:26.253630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.253710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.253736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.253750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.253765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.253795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 
00:29:54.550 [2024-11-20 06:40:26.263652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.263740] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.263766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.263780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.263792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.263822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 00:29:54.550 [2024-11-20 06:40:26.273748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.273841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.273867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.273881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.273894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.273925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 00:29:54.550 [2024-11-20 06:40:26.283747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.283833] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.283864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.283879] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.283892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.283922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 
00:29:54.550 [2024-11-20 06:40:26.293738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.293818] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.293843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.293857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.293870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.293901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 00:29:54.550 [2024-11-20 06:40:26.303893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.303975] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.304001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.550 [2024-11-20 06:40:26.304015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.550 [2024-11-20 06:40:26.304028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.550 [2024-11-20 06:40:26.304057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.550 qpair failed and we were unable to recover it. 00:29:54.550 [2024-11-20 06:40:26.313845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.550 [2024-11-20 06:40:26.313940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.550 [2024-11-20 06:40:26.313969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.313986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.313999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.314029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 
00:29:54.551 [2024-11-20 06:40:26.323856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.551 [2024-11-20 06:40:26.323937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.551 [2024-11-20 06:40:26.323963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.323977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.323998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.324030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 00:29:54.551 [2024-11-20 06:40:26.333888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.551 [2024-11-20 06:40:26.333972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.551 [2024-11-20 06:40:26.334001] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.334018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.334031] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.334061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 00:29:54.551 [2024-11-20 06:40:26.344031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.551 [2024-11-20 06:40:26.344115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.551 [2024-11-20 06:40:26.344142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.344157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.344169] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.344199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 
00:29:54.551 [2024-11-20 06:40:26.353957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.551 [2024-11-20 06:40:26.354085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.551 [2024-11-20 06:40:26.354110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.354125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.354138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.354168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 00:29:54.551 [2024-11-20 06:40:26.363951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.551 [2024-11-20 06:40:26.364037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.551 [2024-11-20 06:40:26.364063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.364077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.364090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.364123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 00:29:54.551 [2024-11-20 06:40:26.373959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.551 [2024-11-20 06:40:26.374047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.551 [2024-11-20 06:40:26.374073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.551 [2024-11-20 06:40:26.374087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.551 [2024-11-20 06:40:26.374100] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.551 [2024-11-20 06:40:26.374131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.551 qpair failed and we were unable to recover it. 
00:29:54.811 [2024-11-20 06:40:26.384006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.384089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.384114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.384128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.384141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.384171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 00:29:54.811 [2024-11-20 06:40:26.394031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.394120] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.394145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.394160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.394172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.394202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 00:29:54.811 [2024-11-20 06:40:26.404055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.404140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.404166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.404179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.404191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.404222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 
00:29:54.811 [2024-11-20 06:40:26.414125] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.414245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.414272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.414286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.414299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.414340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 00:29:54.811 [2024-11-20 06:40:26.424114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.424199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.424225] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.424239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.424252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.424281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 00:29:54.811 [2024-11-20 06:40:26.434145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.434233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.434259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.434273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.434286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.434324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 
00:29:54.811 [2024-11-20 06:40:26.444168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.444254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.444279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.444292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.444312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.444344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 00:29:54.811 [2024-11-20 06:40:26.454300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.454430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.454455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.454476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.454490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.454520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 00:29:54.811 [2024-11-20 06:40:26.464233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.811 [2024-11-20 06:40:26.464337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.811 [2024-11-20 06:40:26.464362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.811 [2024-11-20 06:40:26.464376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.811 [2024-11-20 06:40:26.464388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.811 [2024-11-20 06:40:26.464420] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.811 qpair failed and we were unable to recover it. 
00:29:54.812 [2024-11-20 06:40:26.474328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.474437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.474462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.474476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.474489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.474518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.484318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.484408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.484434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.484449] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.484462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.484493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.494358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.494447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.494473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.494487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.494499] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.494536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 
00:29:54.812 [2024-11-20 06:40:26.504359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.504469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.504494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.504509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.504521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.504551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.514411] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.514540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.514569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.514585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.514599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.514629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.524428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.524521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.524548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.524562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.524575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.524605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 
00:29:54.812 [2024-11-20 06:40:26.534448] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.534535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.534561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.534576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.534588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.534619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.544505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.544635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.544661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.544675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.544688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.544719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.554526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.554617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.554643] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.554657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.554670] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.554700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 
00:29:54.812 [2024-11-20 06:40:26.564615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.564711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.564737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.564751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.564763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.564794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.574548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.574636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.574661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.574675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.574688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.574717] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.812 [2024-11-20 06:40:26.584612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.584741] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.584770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.584793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.584806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.584837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 
00:29:54.812 [2024-11-20 06:40:26.594658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.812 [2024-11-20 06:40:26.594748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.812 [2024-11-20 06:40:26.594774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.812 [2024-11-20 06:40:26.594788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.812 [2024-11-20 06:40:26.594801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.812 [2024-11-20 06:40:26.594832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.812 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-20 06:40:26.604667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.813 [2024-11-20 06:40:26.604757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.813 [2024-11-20 06:40:26.604783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.813 [2024-11-20 06:40:26.604797] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.813 [2024-11-20 06:40:26.604811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.813 [2024-11-20 06:40:26.604842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-20 06:40:26.614669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.813 [2024-11-20 06:40:26.614754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.813 [2024-11-20 06:40:26.614780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.813 [2024-11-20 06:40:26.614795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.813 [2024-11-20 06:40:26.614808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.813 [2024-11-20 06:40:26.614839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.813 qpair failed and we were unable to recover it. 
00:29:54.813 [2024-11-20 06:40:26.624670] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.813 [2024-11-20 06:40:26.624762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.813 [2024-11-20 06:40:26.624787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.813 [2024-11-20 06:40:26.624801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.813 [2024-11-20 06:40:26.624814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.813 [2024-11-20 06:40:26.624850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.813 qpair failed and we were unable to recover it. 00:29:54.813 [2024-11-20 06:40:26.634776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:54.813 [2024-11-20 06:40:26.634876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:54.813 [2024-11-20 06:40:26.634902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:54.813 [2024-11-20 06:40:26.634916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:54.813 [2024-11-20 06:40:26.634929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:54.813 [2024-11-20 06:40:26.634959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:54.813 qpair failed and we were unable to recover it. 00:29:55.073 [2024-11-20 06:40:26.644733] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.073 [2024-11-20 06:40:26.644832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.073 [2024-11-20 06:40:26.644857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.073 [2024-11-20 06:40:26.644871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.073 [2024-11-20 06:40:26.644884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.073 [2024-11-20 06:40:26.644913] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.073 qpair failed and we were unable to recover it. 
00:29:55.073 [2024-11-20 06:40:26.654782] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.654902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.654927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.654941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.654955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.654986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.664836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.664957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.664983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.664997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.665010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.665040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.674850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.674939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.674965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.674979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.674992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.675021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 
00:29:55.074 [2024-11-20 06:40:26.684864] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.684986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.685012] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.685025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.685038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.685069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.694923] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.695052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.695081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.695096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.695109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.695140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.704938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.705025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.705051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.705065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.705078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.705108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 
00:29:55.074 [2024-11-20 06:40:26.714946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.715042] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.715073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.715088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.715101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.715131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.725053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.725144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.725169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.725184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.725196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.725227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.734999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.735088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.735113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.735127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.735140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.735169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 
00:29:55.074 [2024-11-20 06:40:26.745048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.745135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.745161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.745175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.745187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.745218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.755051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.755141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.755167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.755181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.755199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.755230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.765113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.765244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.765269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.765284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.765296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.765336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 
00:29:55.074 [2024-11-20 06:40:26.775120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.074 [2024-11-20 06:40:26.775205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.074 [2024-11-20 06:40:26.775230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.074 [2024-11-20 06:40:26.775244] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.074 [2024-11-20 06:40:26.775257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.074 [2024-11-20 06:40:26.775287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.074 qpair failed and we were unable to recover it. 00:29:55.074 [2024-11-20 06:40:26.785162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.785286] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.785320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.785335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.785348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.785378] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.795204] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.795298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.795331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.795352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.795366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.795397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 
00:29:55.075 [2024-11-20 06:40:26.805207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.805288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.805321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.805336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.805349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.805382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.815229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.815322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.815348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.815362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.815375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.815406] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.825266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.825395] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.825420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.825434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.825447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.825478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 
00:29:55.075 [2024-11-20 06:40:26.835283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.835381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.835406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.835420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.835432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.835464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.845331] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.845421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.845452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.845467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.845480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.845510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.855337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.855427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.855452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.855466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.855479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.855510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 
00:29:55.075 [2024-11-20 06:40:26.865419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.865522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.865550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.865573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.865587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.865619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.875518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.875607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.875634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.875647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.875662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.875693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.885431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.885518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.885544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.885557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.885576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.885607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 
00:29:55.075 [2024-11-20 06:40:26.895495] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.895621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.895650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.895665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.895677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.895707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.075 [2024-11-20 06:40:26.905517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.075 [2024-11-20 06:40:26.905639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.075 [2024-11-20 06:40:26.905666] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.075 [2024-11-20 06:40:26.905679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.075 [2024-11-20 06:40:26.905691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.075 [2024-11-20 06:40:26.905732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.075 qpair failed and we were unable to recover it. 00:29:55.335 [2024-11-20 06:40:26.915529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.335 [2024-11-20 06:40:26.915623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.335 [2024-11-20 06:40:26.915648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.335 [2024-11-20 06:40:26.915662] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.335 [2024-11-20 06:40:26.915675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.335 [2024-11-20 06:40:26.915707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.335 qpair failed and we were unable to recover it. 
00:29:55.335 [2024-11-20 06:40:26.925553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.335 [2024-11-20 06:40:26.925642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.335 [2024-11-20 06:40:26.925668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.335 [2024-11-20 06:40:26.925682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.925694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.925725] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:26.935574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.935652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.935677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.935692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.935704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.935734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:26.945592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.945679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.945705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.945719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.945732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.945761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 
00:29:55.336 [2024-11-20 06:40:26.955641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.955732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.955757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.955770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.955784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.955815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:26.965654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.965744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.965769] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.965782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.965795] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.965826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:26.975675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.975760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.975786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.975800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.975812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.975842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 
00:29:55.336 [2024-11-20 06:40:26.985700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.985815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.985841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.985855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.985868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.985899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:26.995755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:26.995845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:26.995870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:26.995884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:26.995897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:26.995926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:27.005800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:27.005888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:27.005917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:27.005934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:27.005946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:27.005977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 
00:29:55.336 [2024-11-20 06:40:27.015797] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:27.015880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:27.015906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:27.015926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:27.015940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:27.015970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:27.025832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:27.025913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:27.025938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:27.025951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:27.025963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:27.025994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:27.035895] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:27.036004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:27.036030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:27.036044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:27.036057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:27.036087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 
00:29:55.336 [2024-11-20 06:40:27.045925] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:27.046052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:27.046078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:27.046092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.336 [2024-11-20 06:40:27.046104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.336 [2024-11-20 06:40:27.046133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.336 qpair failed and we were unable to recover it. 00:29:55.336 [2024-11-20 06:40:27.055906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.336 [2024-11-20 06:40:27.055991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.336 [2024-11-20 06:40:27.056017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.336 [2024-11-20 06:40:27.056031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.056043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.056079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.065968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.066068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.066093] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.066107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.066120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.066149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 
00:29:55.337 [2024-11-20 06:40:27.075970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.076084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.076110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.076124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.076136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.076167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.086025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.086133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.086159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.086172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.086185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.086216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.096016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.096102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.096127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.096141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.096153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.096182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 
00:29:55.337 [2024-11-20 06:40:27.106066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.106185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.106210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.106224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.106237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.106268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.116097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.116189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.116215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.116229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.116242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.116271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.126198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.126287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.126323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.126338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.126351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.126382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 
00:29:55.337 [2024-11-20 06:40:27.136162] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.136290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.136327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.136342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.136355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.136387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.146173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.146259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.146284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.146314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.146329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.146359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.337 [2024-11-20 06:40:27.156232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.156342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.156371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.156387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.156400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.156431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 
00:29:55.337 [2024-11-20 06:40:27.166260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.337 [2024-11-20 06:40:27.166406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.337 [2024-11-20 06:40:27.166432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.337 [2024-11-20 06:40:27.166446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.337 [2024-11-20 06:40:27.166459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.337 [2024-11-20 06:40:27.166493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.337 qpair failed and we were unable to recover it. 00:29:55.597 [2024-11-20 06:40:27.176266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.597 [2024-11-20 06:40:27.176394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.597 [2024-11-20 06:40:27.176420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.597 [2024-11-20 06:40:27.176434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.597 [2024-11-20 06:40:27.176447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.597 [2024-11-20 06:40:27.176477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.597 qpair failed and we were unable to recover it. 00:29:55.597 [2024-11-20 06:40:27.186285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.597 [2024-11-20 06:40:27.186378] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.186404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.186418] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.186431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.186467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 
00:29:55.598 [2024-11-20 06:40:27.196359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.196447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.196474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.196488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.196500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.196530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.206349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.206435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.206460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.206475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.206487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.206517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.216401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.216490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.216516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.216529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.216542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.216572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 
00:29:55.598 [2024-11-20 06:40:27.226397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.226481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.226507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.226520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.226533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.226565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.236463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.236587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.236613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.236627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.236639] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.236669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.246479] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.246569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.246594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.246607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.246620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.246650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 
00:29:55.598 [2024-11-20 06:40:27.256534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.256657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.256682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.256696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.256709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.256739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.266512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.266603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.266628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.266642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.266655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.266684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.276619] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.276727] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.276757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.276773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.276785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.276815] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 
00:29:55.598 [2024-11-20 06:40:27.286628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.286737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.286763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.286777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.286790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.286821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.296609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.296695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.296720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.296734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.296747] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.296777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 00:29:55.598 [2024-11-20 06:40:27.306632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.598 [2024-11-20 06:40:27.306711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.598 [2024-11-20 06:40:27.306736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.598 [2024-11-20 06:40:27.306750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.598 [2024-11-20 06:40:27.306762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.598 [2024-11-20 06:40:27.306792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.598 qpair failed and we were unable to recover it. 
00:29:55.598 [2024-11-20 06:40:27.316770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.316858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.316883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.316897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.316916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.316947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.326772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.326859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.326886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.326900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.326915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.326945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.336750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.336834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.336860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.336874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.336886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.336916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 
00:29:55.599 [2024-11-20 06:40:27.346776] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.346861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.346886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.346900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.346913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.346942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.356844] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.356932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.356957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.356971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.356984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.357014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.366862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.366985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.367011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.367025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.367037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.367068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 
00:29:55.599 [2024-11-20 06:40:27.376841] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.376926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.376951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.376965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.376977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.377007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.386869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.386973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.386999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.387013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.387027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.387058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.397007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.397098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.397124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.397137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.397150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.397181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 
00:29:55.599 [2024-11-20 06:40:27.406932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.407018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.407049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.407065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.407077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.407108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.416943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.417024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.417049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.417062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.417075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.417105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 00:29:55.599 [2024-11-20 06:40:27.426977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.599 [2024-11-20 06:40:27.427059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.599 [2024-11-20 06:40:27.427083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.599 [2024-11-20 06:40:27.427097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.599 [2024-11-20 06:40:27.427109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.599 [2024-11-20 06:40:27.427140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.599 qpair failed and we were unable to recover it. 
00:29:55.860 [2024-11-20 06:40:27.437054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.860 [2024-11-20 06:40:27.437158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.860 [2024-11-20 06:40:27.437184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.860 [2024-11-20 06:40:27.437198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.860 [2024-11-20 06:40:27.437211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.860 [2024-11-20 06:40:27.437240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.860 qpair failed and we were unable to recover it. 00:29:55.860 [2024-11-20 06:40:27.447054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.860 [2024-11-20 06:40:27.447156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.860 [2024-11-20 06:40:27.447185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.860 [2024-11-20 06:40:27.447199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.860 [2024-11-20 06:40:27.447218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.860 [2024-11-20 06:40:27.447248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.860 qpair failed and we were unable to recover it. 00:29:55.860 [2024-11-20 06:40:27.457073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.860 [2024-11-20 06:40:27.457158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.860 [2024-11-20 06:40:27.457188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.860 [2024-11-20 06:40:27.457207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.457220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.457251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 
00:29:55.861 [2024-11-20 06:40:27.467093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.467201] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.467227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.467242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.467254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.467296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.477154] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.477278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.477312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.477333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.477346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.477376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.487159] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.487248] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.487274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.487288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.487301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.487345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 
00:29:55.861 [2024-11-20 06:40:27.497270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.497363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.497389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.497403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.497415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.497446] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.507225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.507316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.507342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.507356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.507370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.507402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.517301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.517409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.517434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.517448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.517461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.517491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 
00:29:55.861 [2024-11-20 06:40:27.527328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.527448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.527474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.527488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.527501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.527531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.537297] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.537399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.537425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.537439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.537451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.537482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.547364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.547447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.547472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.547485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.547498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.547527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 
00:29:55.861 [2024-11-20 06:40:27.557384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.557470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.557496] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.557510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.557522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.557551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.567397] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.567481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.567507] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.567521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.567533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.567564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 00:29:55.861 [2024-11-20 06:40:27.577423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.577555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.577580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.577601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.861 [2024-11-20 06:40:27.577615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.861 [2024-11-20 06:40:27.577644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.861 qpair failed and we were unable to recover it. 
00:29:55.861 [2024-11-20 06:40:27.587449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.861 [2024-11-20 06:40:27.587551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.861 [2024-11-20 06:40:27.587577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.861 [2024-11-20 06:40:27.587590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.587603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.587645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.597504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.597594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.597619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.597633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.597646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.597677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.607498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.607593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.607618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.607632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.607646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.607677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 
00:29:55.862 [2024-11-20 06:40:27.617527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.617652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.617678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.617691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.617704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.617740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.627553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.627634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.627660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.627674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.627686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.627728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.637646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.637735] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.637760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.637774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.637787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.637816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 
00:29:55.862 [2024-11-20 06:40:27.647641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.647734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.647761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.647774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.647787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.647818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.657640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.657721] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.657747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.657761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.657774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.657805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.667687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.667834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.667861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.667874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.667887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.667916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 
00:29:55.862 [2024-11-20 06:40:27.677708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.677800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.677825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.677839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.677851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.677881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:55.862 [2024-11-20 06:40:27.687748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:55.862 [2024-11-20 06:40:27.687836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:55.862 [2024-11-20 06:40:27.687862] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:55.862 [2024-11-20 06:40:27.687876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:55.862 [2024-11-20 06:40:27.687887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:55.862 [2024-11-20 06:40:27.687918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:55.862 qpair failed and we were unable to recover it. 00:29:56.123 [2024-11-20 06:40:27.697793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.697890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.697916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.697929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.697942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.697972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-11-20 06:40:27.707834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.707922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.707954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.707970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.707982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.708012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-11-20 06:40:27.717875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.717966] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.717991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.718006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.718019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.718050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-11-20 06:40:27.727891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.728021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.728048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.728061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.728075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.728104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-11-20 06:40:27.737912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.738002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.738027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.738041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.738053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.738083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-11-20 06:40:27.747904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.747995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.748020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.748034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.748046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.748082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-11-20 06:40:27.757977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.758071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.758097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.758110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.758123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.758154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 
00:29:56.123 [2024-11-20 06:40:27.767965] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.768051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.768076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.768090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.123 [2024-11-20 06:40:27.768102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.123 [2024-11-20 06:40:27.768132] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.123 qpair failed and we were unable to recover it. 00:29:56.123 [2024-11-20 06:40:27.778004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.123 [2024-11-20 06:40:27.778086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.123 [2024-11-20 06:40:27.778112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.123 [2024-11-20 06:40:27.778126] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.778140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.778170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.788004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.788091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.788116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.788130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.788143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.788172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-11-20 06:40:27.798041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.798134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.798158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.798172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.798186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.798215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.808089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.808179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.808205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.808218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.808231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.808262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.818103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.818188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.818214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.818228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.818240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.818272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-11-20 06:40:27.828118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.828195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.828221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.828235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.828247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.828290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.838193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.838283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.838326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.838341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.838355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.838385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.848177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.848327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.848354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.848368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.848381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.848413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-11-20 06:40:27.858250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.858379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.858408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.858422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.858435] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.858465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.868244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.868340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.868367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.868380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.868394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.868425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.878343] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.878454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.878480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.878493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.878514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.878548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 
00:29:56.124 [2024-11-20 06:40:27.888333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.888451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.888476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.888490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.888503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.888533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.898341] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.898428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.898455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.124 [2024-11-20 06:40:27.898469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.124 [2024-11-20 06:40:27.898481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.124 [2024-11-20 06:40:27.898514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.124 qpair failed and we were unable to recover it. 00:29:56.124 [2024-11-20 06:40:27.908371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.124 [2024-11-20 06:40:27.908455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.124 [2024-11-20 06:40:27.908484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.125 [2024-11-20 06:40:27.908500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.125 [2024-11-20 06:40:27.908513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.125 [2024-11-20 06:40:27.908543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.125 qpair failed and we were unable to recover it. 
00:29:56.125 [2024-11-20 06:40:27.918421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.125 [2024-11-20 06:40:27.918513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.125 [2024-11-20 06:40:27.918539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.125 [2024-11-20 06:40:27.918553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.125 [2024-11-20 06:40:27.918565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.125 [2024-11-20 06:40:27.918596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.125 qpair failed and we were unable to recover it. 00:29:56.125 [2024-11-20 06:40:27.928415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.125 [2024-11-20 06:40:27.928501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.125 [2024-11-20 06:40:27.928527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.125 [2024-11-20 06:40:27.928541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.125 [2024-11-20 06:40:27.928553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.125 [2024-11-20 06:40:27.928585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.125 qpair failed and we were unable to recover it. 00:29:56.125 [2024-11-20 06:40:27.938452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.125 [2024-11-20 06:40:27.938559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.125 [2024-11-20 06:40:27.938585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.125 [2024-11-20 06:40:27.938599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.125 [2024-11-20 06:40:27.938612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.125 [2024-11-20 06:40:27.938641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.125 qpair failed and we were unable to recover it. 
00:29:56.125 [2024-11-20 06:40:27.948471] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.125 [2024-11-20 06:40:27.948591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.125 [2024-11-20 06:40:27.948616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.125 [2024-11-20 06:40:27.948629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.125 [2024-11-20 06:40:27.948642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.125 [2024-11-20 06:40:27.948672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.125 qpair failed and we were unable to recover it. 00:29:56.385 [2024-11-20 06:40:27.958570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.385 [2024-11-20 06:40:27.958685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.385 [2024-11-20 06:40:27.958714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.385 [2024-11-20 06:40:27.958728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.385 [2024-11-20 06:40:27.958741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.385 [2024-11-20 06:40:27.958770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.385 qpair failed and we were unable to recover it. 00:29:56.385 [2024-11-20 06:40:27.968537] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.385 [2024-11-20 06:40:27.968622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.385 [2024-11-20 06:40:27.968654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.385 [2024-11-20 06:40:27.968669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.385 [2024-11-20 06:40:27.968681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.385 [2024-11-20 06:40:27.968710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.385 qpair failed and we were unable to recover it. 
00:29:56.385 [2024-11-20 06:40:27.978687] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.385 [2024-11-20 06:40:27.978772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.385 [2024-11-20 06:40:27.978798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.385 [2024-11-20 06:40:27.978812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.385 [2024-11-20 06:40:27.978824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:27.978854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:27.988645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:27.988765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:27.988790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:27.988804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:27.988816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:27.988846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:27.998667] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:27.998757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:27.998782] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:27.998796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:27.998808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:27.998841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-11-20 06:40:28.008661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.008746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.008772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.008793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.008807] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.008837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.018674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.018793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.018818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.018832] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.018844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.018874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.028730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.028815] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.028840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.028854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.028867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.028896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-11-20 06:40:28.038778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.038904] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.038929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.038943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.038955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.038985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.048778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.048897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.048922] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.048936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.048948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.048978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.058793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.058881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.058907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.058922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.058935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.058965] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-11-20 06:40:28.068804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.068934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.068960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.068973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.068986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.069018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.078896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.079013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.079039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.079052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.079065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.079095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.088913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.088999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.089023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.089037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.089049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.089081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 
00:29:56.386 [2024-11-20 06:40:28.098939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.099026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.099052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.099066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.099077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.386 [2024-11-20 06:40:28.099106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.386 qpair failed and we were unable to recover it. 00:29:56.386 [2024-11-20 06:40:28.108931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.386 [2024-11-20 06:40:28.109047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.386 [2024-11-20 06:40:28.109072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.386 [2024-11-20 06:40:28.109086] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.386 [2024-11-20 06:40:28.109098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.109127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.118971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.119096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.119121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.119135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.119148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.119179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-11-20 06:40:28.129011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.129116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.129141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.129156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.129170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.129201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.139008] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.139087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.139112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.139132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.139145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.139176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.149043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.149141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.149165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.149178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.149190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.149232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-11-20 06:40:28.159098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.159188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.159214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.159228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.159241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.159270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.169136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.169261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.169287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.169306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.169321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.169352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.179140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.179223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.179249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.179263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.179275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.179319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.387 [2024-11-20 06:40:28.189187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.189317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.189343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.189356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.189371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.189402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.199216] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.199314] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.199340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.199353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.199367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.199399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 00:29:56.387 [2024-11-20 06:40:28.209239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.387 [2024-11-20 06:40:28.209331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.387 [2024-11-20 06:40:28.209357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.387 [2024-11-20 06:40:28.209371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.387 [2024-11-20 06:40:28.209383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.387 [2024-11-20 06:40:28.209414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.387 qpair failed and we were unable to recover it. 
00:29:56.647 [2024-11-20 06:40:28.219282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.219414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.219439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.219453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.219465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.219496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.229281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.229382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.229411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.229427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.229439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.229470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.239353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.239444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.239470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.239484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.239497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.239526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 
00:29:56.647 [2024-11-20 06:40:28.249357] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.249481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.249508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.249521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.249534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.249564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.259388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.259469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.259495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.259509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.259523] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.259554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.269401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.269520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.269551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.269566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.269578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.269609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 
00:29:56.647 [2024-11-20 06:40:28.279465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.279559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.279584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.279599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.279612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.279642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.289470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.289559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.289584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.289598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.289610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.289642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.299510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.299593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.299618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.299632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.299644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.299675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 
00:29:56.647 [2024-11-20 06:40:28.309527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.309648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.647 [2024-11-20 06:40:28.309674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.647 [2024-11-20 06:40:28.309687] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.647 [2024-11-20 06:40:28.309700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.647 [2024-11-20 06:40:28.309736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.647 qpair failed and we were unable to recover it. 00:29:56.647 [2024-11-20 06:40:28.319566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.647 [2024-11-20 06:40:28.319657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.319683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.319697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.319709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.319740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.329575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.329654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.329680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.329693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.329706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.329735] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 
00:29:56.648 [2024-11-20 06:40:28.339593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.339675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.339700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.339714] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.339727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.339756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.349645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.349722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.349748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.349761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.349774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.349816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.359696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.359798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.359824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.359838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.359851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.359881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 
00:29:56.648 [2024-11-20 06:40:28.369752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.369837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.369863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.369877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.369890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.369920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.379724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.379846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.379871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.379885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.379897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.379928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.389789] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.389872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.389900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.389917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.389930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.389972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 
00:29:56.648 [2024-11-20 06:40:28.399857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.399953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.399985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.399999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.400012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.400044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.409824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.409931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.409958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.409972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.409984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.410015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.419893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.420007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.420033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.420047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.420060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.420089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 
00:29:56.648 [2024-11-20 06:40:28.429909] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.429989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.430014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.430029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.430042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.430073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.439926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.440016] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.648 [2024-11-20 06:40:28.440042] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.648 [2024-11-20 06:40:28.440055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.648 [2024-11-20 06:40:28.440074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.648 [2024-11-20 06:40:28.440105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.648 qpair failed and we were unable to recover it. 00:29:56.648 [2024-11-20 06:40:28.449942] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.648 [2024-11-20 06:40:28.450025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.649 [2024-11-20 06:40:28.450050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.649 [2024-11-20 06:40:28.450064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.649 [2024-11-20 06:40:28.450077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.649 [2024-11-20 06:40:28.450107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.649 qpair failed and we were unable to recover it. 
00:29:56.649 [2024-11-20 06:40:28.459960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.649 [2024-11-20 06:40:28.460083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.649 [2024-11-20 06:40:28.460109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.649 [2024-11-20 06:40:28.460122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.649 [2024-11-20 06:40:28.460135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.649 [2024-11-20 06:40:28.460167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.649 qpair failed and we were unable to recover it. 00:29:56.649 [2024-11-20 06:40:28.470113] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.649 [2024-11-20 06:40:28.470196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.649 [2024-11-20 06:40:28.470222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.649 [2024-11-20 06:40:28.470236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.649 [2024-11-20 06:40:28.470249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.649 [2024-11-20 06:40:28.470278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.649 qpair failed and we were unable to recover it. 00:29:56.908 [2024-11-20 06:40:28.480037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.908 [2024-11-20 06:40:28.480149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.908 [2024-11-20 06:40:28.480175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.908 [2024-11-20 06:40:28.480189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.908 [2024-11-20 06:40:28.480202] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.908 [2024-11-20 06:40:28.480231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.908 qpair failed and we were unable to recover it. 
00:29:56.908 [2024-11-20 06:40:28.490053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.908 [2024-11-20 06:40:28.490136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.908 [2024-11-20 06:40:28.490162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.908 [2024-11-20 06:40:28.490176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.490189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.490221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.500108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.500205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.500231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.500245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.500258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.500288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.510120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.510207] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.510232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.510246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.510259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.510289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 
00:29:56.909 [2024-11-20 06:40:28.520149] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.520242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.520267] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.520281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.520295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.520335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.530168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.530253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.530284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.530299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.530321] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.530352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.540179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.540259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.540283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.540297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.540319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.540350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 
00:29:56.909 [2024-11-20 06:40:28.550234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.550328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.550358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.550374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.550387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.550418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.560263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.560365] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.560391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.560405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.560418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.560448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.570280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.570374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.570400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.570420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.570433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.570465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 
00:29:56.909 [2024-11-20 06:40:28.580294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.580400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.580426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.580440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.580453] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.580483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.590355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.590439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.590464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.590477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.590490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.590521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.600384] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.600475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.600503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.600520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.600533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.600564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 
00:29:56.909 [2024-11-20 06:40:28.610431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.610568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.909 [2024-11-20 06:40:28.610595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.909 [2024-11-20 06:40:28.610609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.909 [2024-11-20 06:40:28.610622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.909 [2024-11-20 06:40:28.610652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.909 qpair failed and we were unable to recover it. 00:29:56.909 [2024-11-20 06:40:28.620458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.909 [2024-11-20 06:40:28.620588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.620614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.620627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.620640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.620669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.630453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.630533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.630559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.630573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.630586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.630618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 
00:29:56.910 [2024-11-20 06:40:28.640493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.640625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.640650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.640664] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.640677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.640707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.650520] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.650635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.650664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.650680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.650693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.650724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.660558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.660653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.660679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.660693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.660706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.660736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 
00:29:56.910 [2024-11-20 06:40:28.670602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.670688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.670714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.670728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.670740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.670771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.680605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.680697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.680722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.680736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.680749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.680779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.690646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.690737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.690763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.690777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.690790] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.690820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 
00:29:56.910 [2024-11-20 06:40:28.700672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.700763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.700789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.700809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.700823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.700854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.710694] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.710822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.710850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.710867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.710879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.710909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.720763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.720857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.720883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.720897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.720909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.720940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 
00:29:56.910 [2024-11-20 06:40:28.730748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.730865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.730891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.730905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.730918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.730947] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:56.910 [2024-11-20 06:40:28.740774] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:56.910 [2024-11-20 06:40:28.740862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:56.910 [2024-11-20 06:40:28.740889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:56.910 [2024-11-20 06:40:28.740903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:56.910 [2024-11-20 06:40:28.740914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:56.910 [2024-11-20 06:40:28.740950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:56.910 qpair failed and we were unable to recover it. 00:29:57.168 [2024-11-20 06:40:28.750800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.168 [2024-11-20 06:40:28.750876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.168 [2024-11-20 06:40:28.750902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.168 [2024-11-20 06:40:28.750916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.168 [2024-11-20 06:40:28.750928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.750958] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-11-20 06:40:28.760869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.760986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.761011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.761026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.761039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.761068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.770899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.770988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.771014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.771028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.771040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.771070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.780913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.780993] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.781019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.781033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.781045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.781075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-11-20 06:40:28.790931] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.791015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.791040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.791055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.791067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.791098] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.800951] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.801038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.801063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.801077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.801089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.801120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.811011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.811133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.811158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.811172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.811185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.811215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-11-20 06:40:28.821006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.821086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.821111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.821125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.821138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.821169] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.831046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.831168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.831202] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.831220] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.831233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.831266] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.841121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.841249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.841275] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.841289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.841308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.841341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-11-20 06:40:28.851095] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.851191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.851216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.851230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.851243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.851272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.861165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.861291] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.861326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.861341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.861354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.861384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 00:29:57.169 [2024-11-20 06:40:28.871141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.871223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.871249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.871263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.169 [2024-11-20 06:40:28.871281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.169 [2024-11-20 06:40:28.871319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.169 qpair failed and we were unable to recover it. 
00:29:57.169 [2024-11-20 06:40:28.881208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.169 [2024-11-20 06:40:28.881336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.169 [2024-11-20 06:40:28.881362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.169 [2024-11-20 06:40:28.881376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.881388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.881419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.891207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.891335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.891360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.891374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.891387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.891419] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.901250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.901330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.901355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.901369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.901382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.901411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-11-20 06:40:28.911273] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.911370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.911396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.911409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.911422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.911453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.921324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.921426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.921452] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.921466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.921480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.921510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.931342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.931425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.931450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.931464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.931477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.931507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-11-20 06:40:28.941458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.941590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.941616] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.941630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.941643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.941673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.951382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.951472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.951497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.951512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.951524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.951555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.961450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.961540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.961571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.961585] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.961600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.961630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-11-20 06:40:28.971501] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.971593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.971620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.971633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.971647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.971678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.981535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.981617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.981645] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.981660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.981673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.981704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 00:29:57.170 [2024-11-20 06:40:28.991517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.170 [2024-11-20 06:40:28.991612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.170 [2024-11-20 06:40:28.991638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.170 [2024-11-20 06:40:28.991652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.170 [2024-11-20 06:40:28.991665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.170 [2024-11-20 06:40:28.991694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.170 qpair failed and we were unable to recover it. 
00:29:57.170 [2024-11-20 06:40:29.001567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.434 [2024-11-20 06:40:29.001656] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.434 [2024-11-20 06:40:29.001682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.434 [2024-11-20 06:40:29.001696] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.434 [2024-11-20 06:40:29.001714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.434 [2024-11-20 06:40:29.001745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-11-20 06:40:29.011579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.434 [2024-11-20 06:40:29.011696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.434 [2024-11-20 06:40:29.011722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.434 [2024-11-20 06:40:29.011736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.434 [2024-11-20 06:40:29.011749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.434 [2024-11-20 06:40:29.011778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.434 qpair failed and we were unable to recover it. 00:29:57.434 [2024-11-20 06:40:29.021612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.434 [2024-11-20 06:40:29.021697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.434 [2024-11-20 06:40:29.021723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.434 [2024-11-20 06:40:29.021737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.434 [2024-11-20 06:40:29.021749] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.434 [2024-11-20 06:40:29.021779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.434 qpair failed and we were unable to recover it. 
00:29:57.434 [2024-11-20 06:40:29.031626] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.434 [2024-11-20 06:40:29.031705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.434 [2024-11-20 06:40:29.031731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.435 [2024-11-20 06:40:29.031745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.435 [2024-11-20 06:40:29.031757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.435 [2024-11-20 06:40:29.031787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-11-20 06:40:29.041661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.435 [2024-11-20 06:40:29.041753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.435 [2024-11-20 06:40:29.041779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.435 [2024-11-20 06:40:29.041792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.435 [2024-11-20 06:40:29.041805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.435 [2024-11-20 06:40:29.041838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-11-20 06:40:29.051669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.435 [2024-11-20 06:40:29.051754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.435 [2024-11-20 06:40:29.051780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.435 [2024-11-20 06:40:29.051793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.435 [2024-11-20 06:40:29.051806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.435 [2024-11-20 06:40:29.051837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.435 qpair failed and we were unable to recover it. 
00:29:57.435 [2024-11-20 06:40:29.061737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.435 [2024-11-20 06:40:29.061819] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.435 [2024-11-20 06:40:29.061844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.435 [2024-11-20 06:40:29.061858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.435 [2024-11-20 06:40:29.061871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.435 [2024-11-20 06:40:29.061901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-11-20 06:40:29.071740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.435 [2024-11-20 06:40:29.071856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.435 [2024-11-20 06:40:29.071882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.435 [2024-11-20 06:40:29.071895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.435 [2024-11-20 06:40:29.071908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.435 [2024-11-20 06:40:29.071937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.435 qpair failed and we were unable to recover it. 00:29:57.435 [2024-11-20 06:40:29.081814] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.435 [2024-11-20 06:40:29.081910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.435 [2024-11-20 06:40:29.081935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.435 [2024-11-20 06:40:29.081954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.436 [2024-11-20 06:40:29.081968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.436 [2024-11-20 06:40:29.081998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.436 qpair failed and we were unable to recover it. 
00:29:57.436 [2024-11-20 06:40:29.091802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.436 [2024-11-20 06:40:29.091914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.436 [2024-11-20 06:40:29.091945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.436 [2024-11-20 06:40:29.091960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.436 [2024-11-20 06:40:29.091973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.436 [2024-11-20 06:40:29.092003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-11-20 06:40:29.101866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.436 [2024-11-20 06:40:29.101991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.436 [2024-11-20 06:40:29.102017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.436 [2024-11-20 06:40:29.102031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.436 [2024-11-20 06:40:29.102043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.436 [2024-11-20 06:40:29.102072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-11-20 06:40:29.111888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.436 [2024-11-20 06:40:29.111977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.436 [2024-11-20 06:40:29.112003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.436 [2024-11-20 06:40:29.112017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.436 [2024-11-20 06:40:29.112029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.436 [2024-11-20 06:40:29.112060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.436 qpair failed and we were unable to recover it. 
00:29:57.436 [2024-11-20 06:40:29.121921] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.436 [2024-11-20 06:40:29.122048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.436 [2024-11-20 06:40:29.122073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.436 [2024-11-20 06:40:29.122087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.436 [2024-11-20 06:40:29.122099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.436 [2024-11-20 06:40:29.122130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.436 qpair failed and we were unable to recover it. 00:29:57.436 [2024-11-20 06:40:29.131894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.436 [2024-11-20 06:40:29.132027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.436 [2024-11-20 06:40:29.132053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.436 [2024-11-20 06:40:29.132074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.437 [2024-11-20 06:40:29.132088] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.437 [2024-11-20 06:40:29.132118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 06:40:29.141954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.437 [2024-11-20 06:40:29.142037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.437 [2024-11-20 06:40:29.142062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.437 [2024-11-20 06:40:29.142076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.437 [2024-11-20 06:40:29.142089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.437 [2024-11-20 06:40:29.142120] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.437 qpair failed and we were unable to recover it. 
00:29:57.437 [2024-11-20 06:40:29.151974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.437 [2024-11-20 06:40:29.152056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.437 [2024-11-20 06:40:29.152080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.437 [2024-11-20 06:40:29.152094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.437 [2024-11-20 06:40:29.152105] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.437 [2024-11-20 06:40:29.152134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 06:40:29.162013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.437 [2024-11-20 06:40:29.162128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.437 [2024-11-20 06:40:29.162153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.437 [2024-11-20 06:40:29.162167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.437 [2024-11-20 06:40:29.162180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.437 [2024-11-20 06:40:29.162209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.437 qpair failed and we were unable to recover it. 00:29:57.437 [2024-11-20 06:40:29.172027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.437 [2024-11-20 06:40:29.172121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.437 [2024-11-20 06:40:29.172146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.437 [2024-11-20 06:40:29.172161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.437 [2024-11-20 06:40:29.172173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.438 [2024-11-20 06:40:29.172203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.438 qpair failed and we were unable to recover it. 
00:29:57.438 [2024-11-20 06:40:29.182142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.438 [2024-11-20 06:40:29.182228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.438 [2024-11-20 06:40:29.182253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.438 [2024-11-20 06:40:29.182267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.438 [2024-11-20 06:40:29.182280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.438 [2024-11-20 06:40:29.182320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 06:40:29.192085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.438 [2024-11-20 06:40:29.192171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.438 [2024-11-20 06:40:29.192196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.438 [2024-11-20 06:40:29.192210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.438 [2024-11-20 06:40:29.192222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.438 [2024-11-20 06:40:29.192252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 06:40:29.202127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.438 [2024-11-20 06:40:29.202221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.438 [2024-11-20 06:40:29.202247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.438 [2024-11-20 06:40:29.202261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.438 [2024-11-20 06:40:29.202273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.438 [2024-11-20 06:40:29.202309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.438 qpair failed and we were unable to recover it. 
00:29:57.438 [2024-11-20 06:40:29.212139] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.438 [2024-11-20 06:40:29.212268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.438 [2024-11-20 06:40:29.212293] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.438 [2024-11-20 06:40:29.212315] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.438 [2024-11-20 06:40:29.212330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.438 [2024-11-20 06:40:29.212371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.438 qpair failed and we were unable to recover it. 00:29:57.438 [2024-11-20 06:40:29.222165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.438 [2024-11-20 06:40:29.222254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.438 [2024-11-20 06:40:29.222280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.438 [2024-11-20 06:40:29.222294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.439 [2024-11-20 06:40:29.222316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.439 [2024-11-20 06:40:29.222360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 06:40:29.232174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.439 [2024-11-20 06:40:29.232261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.439 [2024-11-20 06:40:29.232287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.439 [2024-11-20 06:40:29.232311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.439 [2024-11-20 06:40:29.232327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.439 [2024-11-20 06:40:29.232357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.439 qpair failed and we were unable to recover it. 
00:29:57.439 [2024-11-20 06:40:29.242253] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.439 [2024-11-20 06:40:29.242374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.439 [2024-11-20 06:40:29.242399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.439 [2024-11-20 06:40:29.242413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.439 [2024-11-20 06:40:29.242426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.439 [2024-11-20 06:40:29.242456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 06:40:29.252259] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.439 [2024-11-20 06:40:29.252352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.439 [2024-11-20 06:40:29.252378] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.439 [2024-11-20 06:40:29.252392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.439 [2024-11-20 06:40:29.252406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.439 [2024-11-20 06:40:29.252437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.439 qpair failed and we were unable to recover it. 00:29:57.439 [2024-11-20 06:40:29.262284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.439 [2024-11-20 06:40:29.262381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.439 [2024-11-20 06:40:29.262408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.439 [2024-11-20 06:40:29.262427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.439 [2024-11-20 06:40:29.262441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.439 [2024-11-20 06:40:29.262473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.439 qpair failed and we were unable to recover it. 
00:29:57.698 [2024-11-20 06:40:29.272290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.272386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.272413] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.272426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.272439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.272471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 00:29:57.698 [2024-11-20 06:40:29.282346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.282435] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.282461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.282474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.282487] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.282517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 00:29:57.698 [2024-11-20 06:40:29.292378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.292467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.292494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.292508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.292520] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.292552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 
00:29:57.698 [2024-11-20 06:40:29.302404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.302499] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.302525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.302539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.302557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.302593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 00:29:57.698 [2024-11-20 06:40:29.312421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.312501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.312526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.312540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.312553] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.312584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 00:29:57.698 [2024-11-20 06:40:29.322516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.322614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.322639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.322652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.322665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.322696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 
00:29:57.698 [2024-11-20 06:40:29.332527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.332651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.332676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.332690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.332702] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.332733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 00:29:57.698 [2024-11-20 06:40:29.342508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.342596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.342621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.342635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.342648] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe768000b90 00:29:57.698 [2024-11-20 06:40:29.342677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:57.698 qpair failed and we were unable to recover it. 00:29:57.698 [2024-11-20 06:40:29.352581] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.698 [2024-11-20 06:40:29.352673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.698 [2024-11-20 06:40:29.352705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.698 [2024-11-20 06:40:29.352721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.698 [2024-11-20 06:40:29.352734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe75c000b90 00:29:57.698 [2024-11-20 06:40:29.352767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:57.698 qpair failed and we were unable to recover it. 
00:29:57.699 [2024-11-20 06:40:29.362605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.362692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.362724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.362740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.362753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.362785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.372603] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.372713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.372741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.372755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.372768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.372799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.382692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.382782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.382810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.382824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.382837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.382868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 
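From this point the rejected connects land on different transport qpairs (tqpair=0x7fe75c000b90 on qpair id 4 a few entries back, then 0x7fe760000b90 on qpair id 2 in the entries directly above), but the status fields never change: sct 1, sc 130. As a hedged reading only, sct 1 is the command-specific status type and 130 is 0x82, which in the Fabrics status encoding is the invalid-parameters class of CONNECT rejection; that is consistent with the target-side "Unknown controller ID 0x1" message, since each CONNECT names a controller ID the target no longer recognizes. The log itself does not spell out that mapping. A small sketch that pulls the status pairs out of the saved log and prints the status code in hex:

# Hedged sketch: extract the "sct N, sc N" pairs printed above and show the
# status code in hex (130 == 0x82). "console.log" is again a placeholder name.
grep -o 'sct [0-9]*, sc [0-9]*' console.log | sort | uniq -c | \
  awk '{ gsub(",", "", $3); printf "%6d x  sct=%s sc=0x%02x\n", $1, $3, $5 }'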
00:29:57.699 [2024-11-20 06:40:29.392646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.392726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.392758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.392774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.392786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.392818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.402703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.402794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.402821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.402835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.402847] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.402878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.412719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.412841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.412867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.412881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.412895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.412927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 
00:29:57.699 [2024-11-20 06:40:29.422832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.422918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.422945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.422959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.422972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.423004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.432791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.432874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.432900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.432914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.432933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.432963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.442822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.442912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.442940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.442954] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.442967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.443011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 
00:29:57.699 [2024-11-20 06:40:29.452868] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.452958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.452985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.452999] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.699 [2024-11-20 06:40:29.453012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.699 [2024-11-20 06:40:29.453042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.699 qpair failed and we were unable to recover it. 00:29:57.699 [2024-11-20 06:40:29.462875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.699 [2024-11-20 06:40:29.462959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.699 [2024-11-20 06:40:29.462986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.699 [2024-11-20 06:40:29.463000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.463012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.463044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 00:29:57.700 [2024-11-20 06:40:29.472963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.700 [2024-11-20 06:40:29.473056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.700 [2024-11-20 06:40:29.473082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.700 [2024-11-20 06:40:29.473096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.473108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.473139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 
00:29:57.700 [2024-11-20 06:40:29.482956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.700 [2024-11-20 06:40:29.483043] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.700 [2024-11-20 06:40:29.483069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.700 [2024-11-20 06:40:29.483083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.483095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.483139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 00:29:57.700 [2024-11-20 06:40:29.492973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.700 [2024-11-20 06:40:29.493067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.700 [2024-11-20 06:40:29.493094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.700 [2024-11-20 06:40:29.493108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.493121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.493151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 00:29:57.700 [2024-11-20 06:40:29.502975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.700 [2024-11-20 06:40:29.503095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.700 [2024-11-20 06:40:29.503121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.700 [2024-11-20 06:40:29.503134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.503147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.503177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 
00:29:57.700 [2024-11-20 06:40:29.512976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.700 [2024-11-20 06:40:29.513057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.700 [2024-11-20 06:40:29.513083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.700 [2024-11-20 06:40:29.513097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.513110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.513140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 00:29:57.700 [2024-11-20 06:40:29.523024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.700 [2024-11-20 06:40:29.523129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.700 [2024-11-20 06:40:29.523161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.700 [2024-11-20 06:40:29.523176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.700 [2024-11-20 06:40:29.523188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.700 [2024-11-20 06:40:29.523218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.700 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-20 06:40:29.533105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.533217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.533243] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.533256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.533269] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.533309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-20 06:40:29.543094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.543184] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.543210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.543224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.543237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.543269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-20 06:40:29.553107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.553192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.553217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.553231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.553244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.553274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-20 06:40:29.563147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.563236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.563261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.563276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.563294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.563336] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-20 06:40:29.573182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.573268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.573294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.573316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.573330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.573360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-20 06:40:29.583187] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.583266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.583292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.583316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.583330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.583360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 00:29:57.959 [2024-11-20 06:40:29.593224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.593318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.593344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.593358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.593371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.959 [2024-11-20 06:40:29.593401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.959 qpair failed and we were unable to recover it. 
00:29:57.959 [2024-11-20 06:40:29.603264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.959 [2024-11-20 06:40:29.603368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.959 [2024-11-20 06:40:29.603394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.959 [2024-11-20 06:40:29.603408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.959 [2024-11-20 06:40:29.603420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.603451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.613294] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.613411] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.613437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.613450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.613463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.613495] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.623332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.623424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.623454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.623470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.623483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.623514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-20 06:40:29.633381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.633491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.633517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.633532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.633544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.633577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.643408] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.643524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.643551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.643565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.643577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.643607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.653409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.653496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.653527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.653542] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.653555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.653585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-20 06:40:29.663437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.663546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.663572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.663586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.663599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.663631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.673461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.673544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.673570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.673584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.673596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.673637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.683517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.683604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.683629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.683643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.683655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.683687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 
00:29:57.960 [2024-11-20 06:40:29.693533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.693626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.693651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.693671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.693685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.693716] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.703548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.703630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.703657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.703671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.960 [2024-11-20 06:40:29.703683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.960 [2024-11-20 06:40:29.703713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.960 qpair failed and we were unable to recover it. 00:29:57.960 [2024-11-20 06:40:29.713594] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.960 [2024-11-20 06:40:29.713681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.960 [2024-11-20 06:40:29.713711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.960 [2024-11-20 06:40:29.713726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.713739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.713770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 
00:29:57.961 [2024-11-20 06:40:29.723614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.723707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.723734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.723747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.723760] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.723790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 00:29:57.961 [2024-11-20 06:40:29.733698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.733777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.733806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.733820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.733833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.733882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 00:29:57.961 [2024-11-20 06:40:29.743674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.743753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.743780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.743794] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.743806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.743838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 
00:29:57.961 [2024-11-20 06:40:29.753676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.753759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.753786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.753799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.753812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.753841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 00:29:57.961 [2024-11-20 06:40:29.763722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.763814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.763840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.763853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.763866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.763896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 00:29:57.961 [2024-11-20 06:40:29.773792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.773911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.773938] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.773952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.773964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.773994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 
00:29:57.961 [2024-11-20 06:40:29.783854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:57.961 [2024-11-20 06:40:29.783945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:57.961 [2024-11-20 06:40:29.783971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:57.961 [2024-11-20 06:40:29.783984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:57.961 [2024-11-20 06:40:29.783997] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:57.961 [2024-11-20 06:40:29.784027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:57.961 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.793794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.793884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.793910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.793925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.793938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.793968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.803886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.803977] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.804005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.804019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.804032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.804066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-11-20 06:40:29.813892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.813980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.814006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.814020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.814033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.814064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.823871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.824004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.824030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.824050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.824064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.824094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.833940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.834020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.834046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.834060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.834073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.834103] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-11-20 06:40:29.844000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.844118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.844144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.844158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.844170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.844201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.854085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.854212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.854238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.854252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.854265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.854296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.863999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.864089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.864114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.864129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.864141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.864178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-11-20 06:40:29.874016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.874110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.874136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.874150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.874163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.874193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.884078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.884166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.884192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.884205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.884218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.884249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 00:29:58.221 [2024-11-20 06:40:29.894098] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.221 [2024-11-20 06:40:29.894213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.221 [2024-11-20 06:40:29.894239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.221 [2024-11-20 06:40:29.894252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.221 [2024-11-20 06:40:29.894265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.221 [2024-11-20 06:40:29.894296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.221 qpair failed and we were unable to recover it. 
00:29:58.221 [2024-11-20 06:40:29.904110] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.904194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.904220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.904234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.904247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.904278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:29.914148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.914231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.914258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.914272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.914285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.914323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:29.924174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.924261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.924288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.924309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.924325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.924357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-11-20 06:40:29.934241] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.934335] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.934360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.934375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.934387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.934417] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:29.944332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.944418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.944444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.944457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.944470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.944500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:29.954290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.954405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.954437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.954452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.954465] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.954496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-11-20 06:40:29.964296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.964397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.964423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.964436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.964449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.964481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:29.974332] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.974421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.974447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.974461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.974473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.974504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:29.984353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.984443] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.984472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.984490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.984503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.984535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-11-20 06:40:29.994382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:29.994504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:29.994534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:29.994548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:29.994570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:29.994604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:30.004561] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:30.004704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:30.004736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:30.004763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:30.004779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:30.004828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:30.014521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:30.014643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:30.014671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:30.014686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:30.014699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:30.014731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 
00:29:58.222 [2024-11-20 06:40:30.024555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.222 [2024-11-20 06:40:30.024674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.222 [2024-11-20 06:40:30.024701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.222 [2024-11-20 06:40:30.024716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.222 [2024-11-20 06:40:30.024729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.222 [2024-11-20 06:40:30.024760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.222 qpair failed and we were unable to recover it. 00:29:58.222 [2024-11-20 06:40:30.034570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.223 [2024-11-20 06:40:30.034680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.223 [2024-11-20 06:40:30.034710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.223 [2024-11-20 06:40:30.034724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.223 [2024-11-20 06:40:30.034738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.223 [2024-11-20 06:40:30.034772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.223 qpair failed and we were unable to recover it. 00:29:58.223 [2024-11-20 06:40:30.044640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.223 [2024-11-20 06:40:30.044759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.223 [2024-11-20 06:40:30.044787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.223 [2024-11-20 06:40:30.044802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.223 [2024-11-20 06:40:30.044815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.223 [2024-11-20 06:40:30.044846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.223 qpair failed and we were unable to recover it. 
00:29:58.481 [2024-11-20 06:40:30.054645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.481 [2024-11-20 06:40:30.054743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.481 [2024-11-20 06:40:30.054770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.054785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.054799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.054832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.064631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.064720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.064748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.064763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.064776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.064809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.074613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.074701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.074728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.074742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.074755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.074786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 
00:29:58.482 [2024-11-20 06:40:30.084658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.084748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.084781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.084796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.084809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.084840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.094672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.094809] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.094835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.094849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.094862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.094894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.104684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.104768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.104794] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.104808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.104821] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.104852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 
00:29:58.482 [2024-11-20 06:40:30.114758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.114843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.114869] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.114884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.114897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.114928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.124765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.124874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.124901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.124916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.124935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.124967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.134771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.134861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.134888] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.134902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.134915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.134946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 
00:29:58.482 [2024-11-20 06:40:30.144799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.144887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.144914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.144928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.144940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.144972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.154887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.155005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.155031] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.155045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.155057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.482 [2024-11-20 06:40:30.155087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.482 qpair failed and we were unable to recover it. 00:29:58.482 [2024-11-20 06:40:30.164962] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.482 [2024-11-20 06:40:30.165053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.482 [2024-11-20 06:40:30.165079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.482 [2024-11-20 06:40:30.165094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.482 [2024-11-20 06:40:30.165107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.165139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 
00:29:58.483 [2024-11-20 06:40:30.174929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.175021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.175047] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.175061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.175074] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.175107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.184907] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.185000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.185027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.185041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.185053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.185084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.194971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.195093] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.195119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.195134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.195146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.195177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 
00:29:58.483 [2024-11-20 06:40:30.205033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.205135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.205161] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.205175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.205188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.205219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.215028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.215154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.215186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.215201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.215214] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.215244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.225064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.225149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.225176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.225190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.225203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.225234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 
00:29:58.483 [2024-11-20 06:40:30.235123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.235211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.235237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.235251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.235265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.235295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.245140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.245242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.245271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.245286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.245299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.245341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.255163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.255282] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.255316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.255339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.255353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.255384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 
00:29:58.483 [2024-11-20 06:40:30.265142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.265225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.265251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.265265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.265277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.265316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.275166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.275259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.275284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.275298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.275325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.275358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 00:29:58.483 [2024-11-20 06:40:30.285235] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.285370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.483 [2024-11-20 06:40:30.285396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.483 [2024-11-20 06:40:30.285410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.483 [2024-11-20 06:40:30.285423] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.483 [2024-11-20 06:40:30.285453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.483 qpair failed and we were unable to recover it. 
00:29:58.483 [2024-11-20 06:40:30.295243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.483 [2024-11-20 06:40:30.295341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.484 [2024-11-20 06:40:30.295367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.484 [2024-11-20 06:40:30.295381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.484 [2024-11-20 06:40:30.295394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.484 [2024-11-20 06:40:30.295431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.484 qpair failed and we were unable to recover it. 00:29:58.484 [2024-11-20 06:40:30.305274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.484 [2024-11-20 06:40:30.305374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.484 [2024-11-20 06:40:30.305400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.484 [2024-11-20 06:40:30.305415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.484 [2024-11-20 06:40:30.305427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.484 [2024-11-20 06:40:30.305458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.484 qpair failed and we were unable to recover it. 00:29:58.742 [2024-11-20 06:40:30.315324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.315409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.315436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.315450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.315463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.742 [2024-11-20 06:40:30.315494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-11-20 06:40:30.325340] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.325432] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.325458] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.325472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.325485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.742 [2024-11-20 06:40:30.325517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-11-20 06:40:30.335443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.335531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.335557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.335572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.335584] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.742 [2024-11-20 06:40:30.335616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-11-20 06:40:30.345420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.345538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.345568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.345583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.345597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.742 [2024-11-20 06:40:30.345629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.742 qpair failed and we were unable to recover it. 
00:29:58.742 [2024-11-20 06:40:30.355437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.355522] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.355548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.355562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.355576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.742 [2024-11-20 06:40:30.355608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-11-20 06:40:30.365473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.365560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.365587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.365601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.365614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fe760000b90 00:29:58.742 [2024-11-20 06:40:30.365645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:58.742 qpair failed and we were unable to recover it. 00:29:58.742 [2024-11-20 06:40:30.375489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.375576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.375609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.742 [2024-11-20 06:40:30.375624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.742 [2024-11-20 06:40:30.375638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x205ffa0 00:29:58.742 [2024-11-20 06:40:30.375669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.742 qpair failed and we were unable to recover it. 
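(Editor's note, not part of the captured log.) The repeated failures above all come out of the host-side fabric CONNECT poll path named in the messages (nvme_fabric_qpair_connect_poll via spdk_nvme_qpair_process_completions), and the block that follows shows the host giving up and resetting the controller. As a rough, hedged illustration only -- this is not the test code that produced this output -- a minimal host program exercising the same public API against the target shown in the log (NVMe/TCP, 10.0.0.2:4420, nqn.2016-06.io.spdk:cnode1) could look like the sketch below; the program name, option values, and error handling are illustrative assumptions, while the address, port, and subsystem NQN are taken from the log.

#include <stdio.h>
#include <string.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

int main(void)
{
        struct spdk_env_opts env_opts;
        struct spdk_nvme_transport_id trid;
        struct spdk_nvme_ctrlr *ctrlr;
        struct spdk_nvme_qpair *qpair;
        int rc;

        /* Bring up the SPDK environment (hugepages, memory, etc.). */
        spdk_env_opts_init(&env_opts);
        env_opts.name = "connect_poll_sketch";   /* illustrative name */
        if (spdk_env_init(&env_opts) < 0) {
                fprintf(stderr, "spdk_env_init failed\n");
                return 1;
        }

        /* Target taken from the log: NVMe/TCP, 10.0.0.2:4420, cnode1. */
        memset(&trid, 0, sizeof(trid));
        spdk_nvme_trid_populate_transport(&trid, SPDK_NVME_TRANSPORT_TCP);
        trid.adrfam = SPDK_NVMF_ADRFAM_IPV4;
        snprintf(trid.traddr, sizeof(trid.traddr), "%s", "10.0.0.2");
        snprintf(trid.trsvcid, sizeof(trid.trsvcid), "%s", "4420");
        snprintf(trid.subnqn, sizeof(trid.subnqn), "%s", "nqn.2016-06.io.spdk:cnode1");

        /* Admin connect: creates the controller and its admin queue. */
        ctrlr = spdk_nvme_connect(&trid, NULL, 0);
        if (ctrlr == NULL) {
                fprintf(stderr, "failed to connect to %s\n", trid.traddr);
                spdk_env_fini();
                return 1;
        }

        /*
         * Allocating an I/O qpair issues the fabrics CONNECT for that queue;
         * the "Failed to poll NVMe-oF Fabric CONNECT command" errors in the
         * log are reported from this path when the target rejects CONNECT.
         */
        qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
        if (qpair == NULL) {
                fprintf(stderr, "I/O qpair connect failed\n");
        } else {
                /*
                 * Poll the qpair; a negative return value indicates a
                 * transport-level failure, matching the "CQ transport
                 * error -6" messages in the log.
                 */
                rc = spdk_nvme_qpair_process_completions(qpair, 0);
                if (rc < 0) {
                        fprintf(stderr, "process_completions returned %d\n", rc);
                }
                spdk_nvme_ctrlr_free_io_qpair(qpair);
        }

        spdk_nvme_detach(ctrlr);
        spdk_env_fini();
        return 0;
}

Such a sketch would typically be built against SPDK's installed nvme and env libraries; exact link flags depend on the SPDK build and are not shown in this log.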
00:29:58.742 [2024-11-20 06:40:30.385504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:29:58.742 [2024-11-20 06:40:30.385589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:29:58.742 [2024-11-20 06:40:30.385618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:29:58.743 [2024-11-20 06:40:30.385640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:58.743 [2024-11-20 06:40:30.385654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x205ffa0 00:29:58.743 [2024-11-20 06:40:30.385684] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:58.743 qpair failed and we were unable to recover it. 00:29:58.743 [2024-11-20 06:40:30.385805] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:29:58.743 A controller has encountered a failure and is being reset. 00:29:58.743 Controller properly reset. 00:29:58.743 Initializing NVMe Controllers 00:29:58.743 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.743 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:58.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:29:58.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:29:58.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:29:58.743 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:29:58.743 Initialization complete. Launching workers. 
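The run of identical failures above is the nvmf_target_disconnect tc2 scenario doing its job: every I/O-qpair CONNECT is rejected because the target no longer recognizes controller ID 0x1, and the host sees the Fabrics CONNECT completion with sct 1, sc 130 (0x82, which for a CONNECT command reads as the invalid-parameters status, consistent with the target-side message) until the keep-alive fails and the controller is reset and reattached. A quick, hypothetical way to confirm that a captured console log contains only this one failure signature (this helper is not part of the test suite, and console.log is an assumed filename):

    # Hypothetical post-processing step, not part of the suite: tally CONNECT
    # completion statuses in a saved console log; here they should all be sct 1, sc 130.
    grep -o 'Connect command completed with error: sct [0-9]*, sc [0-9]*' console.log \
        | sort | uniq -c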
00:29:58.743 Starting thread on core 1 00:29:58.743 Starting thread on core 2 00:29:58.743 Starting thread on core 3 00:29:58.743 Starting thread on core 0 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:29:58.743 00:29:58.743 real 0m10.789s 00:29:58.743 user 0m19.066s 00:29:58.743 sys 0m5.263s 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1128 -- # xtrace_disable 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:58.743 ************************************ 00:29:58.743 END TEST nvmf_target_disconnect_tc2 00:29:58.743 ************************************ 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:58.743 rmmod nvme_tcp 00:29:58.743 rmmod nvme_fabrics 00:29:58.743 rmmod nvme_keyring 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 2206869 ']' 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 2206869 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # '[' -z 2206869 ']' 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # kill -0 2206869 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # uname 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2206869 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # process_name=reactor_4 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@962 -- # '[' reactor_4 = sudo ']' 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2206869' 00:29:58.743 killing process with pid 2206869 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
common/autotest_common.sh@971 -- # kill 2206869 00:29:58.743 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@976 -- # wait 2206869 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:59.002 06:40:30 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.539 06:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:01.539 00:30:01.539 real 0m15.910s 00:30:01.539 user 0m45.583s 00:30:01.539 sys 0m7.475s 00:30:01.539 06:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:01.539 06:40:32 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:01.539 ************************************ 00:30:01.539 END TEST nvmf_target_disconnect 00:30:01.539 ************************************ 00:30:01.539 06:40:32 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:01.539 00:30:01.539 real 5m8.056s 00:30:01.539 user 10m53.572s 00:30:01.539 sys 1m14.041s 00:30:01.539 06:40:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:01.539 06:40:32 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:01.539 ************************************ 00:30:01.539 END TEST nvmf_host 00:30:01.539 ************************************ 00:30:01.539 06:40:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:01.539 06:40:32 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:01.539 06:40:32 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:01.539 06:40:32 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:01.539 06:40:32 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:01.539 06:40:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:01.539 ************************************ 00:30:01.539 START TEST nvmf_target_core_interrupt_mode 00:30:01.539 ************************************ 00:30:01.539 06:40:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1127 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:01.539 * Looking for test storage... 00:30:01.539 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lcov --version 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:01.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.539 --rc genhtml_branch_coverage=1 00:30:01.539 --rc genhtml_function_coverage=1 00:30:01.539 --rc genhtml_legend=1 00:30:01.539 --rc geninfo_all_blocks=1 00:30:01.539 --rc geninfo_unexecuted_blocks=1 00:30:01.539 00:30:01.539 ' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:01.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.539 --rc genhtml_branch_coverage=1 00:30:01.539 --rc genhtml_function_coverage=1 00:30:01.539 --rc genhtml_legend=1 00:30:01.539 --rc geninfo_all_blocks=1 00:30:01.539 --rc geninfo_unexecuted_blocks=1 00:30:01.539 00:30:01.539 ' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:01.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.539 --rc genhtml_branch_coverage=1 00:30:01.539 --rc genhtml_function_coverage=1 00:30:01.539 --rc genhtml_legend=1 00:30:01.539 --rc geninfo_all_blocks=1 00:30:01.539 --rc geninfo_unexecuted_blocks=1 00:30:01.539 00:30:01.539 ' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:01.539 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.539 --rc genhtml_branch_coverage=1 00:30:01.539 --rc genhtml_function_coverage=1 00:30:01.539 --rc genhtml_legend=1 00:30:01.539 --rc geninfo_all_blocks=1 00:30:01.539 --rc geninfo_unexecuted_blocks=1 00:30:01.539 00:30:01.539 ' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.539 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:01.540 ************************************ 00:30:01.540 START TEST nvmf_abort 00:30:01.540 ************************************ 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:01.540 * Looking for test storage... 00:30:01.540 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lcov --version 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:01.540 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:01.540 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.540 --rc genhtml_branch_coverage=1 00:30:01.540 --rc genhtml_function_coverage=1 00:30:01.540 --rc genhtml_legend=1 00:30:01.541 --rc geninfo_all_blocks=1 00:30:01.541 --rc geninfo_unexecuted_blocks=1 00:30:01.541 00:30:01.541 ' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:01.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.541 --rc genhtml_branch_coverage=1 00:30:01.541 --rc genhtml_function_coverage=1 00:30:01.541 --rc genhtml_legend=1 00:30:01.541 --rc geninfo_all_blocks=1 00:30:01.541 --rc geninfo_unexecuted_blocks=1 00:30:01.541 00:30:01.541 ' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:01.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.541 --rc genhtml_branch_coverage=1 00:30:01.541 --rc genhtml_function_coverage=1 00:30:01.541 --rc genhtml_legend=1 00:30:01.541 --rc geninfo_all_blocks=1 00:30:01.541 --rc geninfo_unexecuted_blocks=1 00:30:01.541 00:30:01.541 ' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:01.541 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:01.541 --rc genhtml_branch_coverage=1 00:30:01.541 --rc genhtml_function_coverage=1 00:30:01.541 --rc genhtml_legend=1 00:30:01.541 --rc geninfo_all_blocks=1 00:30:01.541 --rc geninfo_unexecuted_blocks=1 00:30:01.541 00:30:01.541 ' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:01.541 06:40:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:01.541 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:01.542 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:01.542 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:30:01.542 06:40:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.076 06:40:35 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.076 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:04.077 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 
00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:04.077 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:04.077 Found net devices under 0000:09:00.0: cvl_0_0 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for 
net_dev in "${!pci_net_devs[@]}" 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:04.077 Found net devices under 0000:09:00.1: cvl_0_1 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip 
netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.077 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.077 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:30:04.077 00:30:04.077 --- 10.0.0.2 ping statistics --- 00:30:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.077 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.077 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.077 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:30:04.077 00:30:04.077 --- 10.0.0.1 ping statistics --- 00:30:04.077 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.077 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=2209684 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 2209684 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@833 -- # '[' -z 2209684 ']' 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:04.077 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.078 [2024-11-20 06:40:35.614071] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:04.078 [2024-11-20 06:40:35.615113] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:30:04.078 [2024-11-20 06:40:35.615167] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.078 [2024-11-20 06:40:35.703146] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:04.078 [2024-11-20 06:40:35.775609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.078 [2024-11-20 06:40:35.775665] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.078 [2024-11-20 06:40:35.775705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.078 [2024-11-20 06:40:35.775735] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.078 [2024-11-20 06:40:35.775756] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.078 [2024-11-20 06:40:35.777545] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.078 [2024-11-20 06:40:35.777619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.078 [2024-11-20 06:40:35.777609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.078 [2024-11-20 06:40:35.880429] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:04.078 [2024-11-20 06:40:35.880691] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:04.078 [2024-11-20 06:40:35.880726] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
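For anyone reconstructing the environment by hand, the trace above reduces to a small amount of plumbing: one E810 port (cvl_0_1) stays in the root namespace as the initiator side, the other (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace for the target, both sides get a 10.0.0.0/24 address, port 4420 is opened, connectivity is ping-checked, and nvmf_tgt is started inside the namespace in interrupt mode. A condensed sketch of the same commands, copied from the trace (interface names, addresses and workspace path are specific to this CI host, not general defaults):

    # Condensed from the trace above; cvl_0_0/cvl_0_1, 10.0.0.x and the workspace
    # path are specific to this host.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
    # Interrupt-mode target on cores 1-3 (-m 0xE), matching the reactor messages above:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
        -i 0 -e 0xFFFF --interrupt-mode -m 0xE &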
00:30:04.078 [2024-11-20 06:40:35.881039] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@866 -- # return 0 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.336 [2024-11-20 06:40:35.974424] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.336 06:40:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.336 Malloc0 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.336 Delay0 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.336 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 
00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.337 [2024-11-20 06:40:36.046598] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:04.337 06:40:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:04.595 [2024-11-20 06:40:36.196415] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:07.124 Initializing NVMe Controllers 00:30:07.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:07.124 controller IO queue size 128 less than required 00:30:07.124 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:07.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:07.124 Initialization complete. Launching workers. 
00:30:07.124 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 28393 00:30:07.124 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 28450, failed to submit 66 00:30:07.124 success 28393, unsuccessful 57, failed 0 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:07.124 rmmod nvme_tcp 00:30:07.124 rmmod nvme_fabrics 00:30:07.124 rmmod nvme_keyring 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 2209684 ']' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 2209684 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@952 -- # '[' -z 2209684 ']' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@956 -- # kill -0 2209684 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # uname 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2209684 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2209684' 00:30:07.124 killing process with pid 2209684 
00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@971 -- # kill 2209684 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@976 -- # wait 2209684 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.124 06:40:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:09.025 00:30:09.025 real 0m7.646s 00:30:09.025 user 0m10.045s 00:30:09.025 sys 0m2.946s 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:09.025 ************************************ 00:30:09.025 END TEST nvmf_abort 00:30:09.025 ************************************ 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:09.025 ************************************ 00:30:09.025 START TEST nvmf_ns_hotplug_stress 00:30:09.025 ************************************ 00:30:09.025 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:09.284 * Looking for test storage... 
00:30:09.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:09.284 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lcov --version 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:09.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.285 --rc genhtml_branch_coverage=1 00:30:09.285 --rc genhtml_function_coverage=1 00:30:09.285 --rc genhtml_legend=1 00:30:09.285 --rc geninfo_all_blocks=1 00:30:09.285 --rc geninfo_unexecuted_blocks=1 00:30:09.285 00:30:09.285 ' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:09.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.285 --rc genhtml_branch_coverage=1 00:30:09.285 --rc genhtml_function_coverage=1 00:30:09.285 --rc genhtml_legend=1 00:30:09.285 --rc geninfo_all_blocks=1 00:30:09.285 --rc geninfo_unexecuted_blocks=1 00:30:09.285 00:30:09.285 ' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:09.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.285 --rc genhtml_branch_coverage=1 00:30:09.285 --rc genhtml_function_coverage=1 00:30:09.285 --rc genhtml_legend=1 00:30:09.285 --rc geninfo_all_blocks=1 00:30:09.285 --rc geninfo_unexecuted_blocks=1 00:30:09.285 00:30:09.285 ' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:09.285 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:09.285 --rc genhtml_branch_coverage=1 00:30:09.285 --rc genhtml_function_coverage=1 
00:30:09.285 --rc genhtml_legend=1 00:30:09.285 --rc geninfo_all_blocks=1 00:30:09.285 --rc geninfo_unexecuted_blocks=1 00:30:09.285 00:30:09.285 ' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:09.285 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:30:09.286 06:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:11.277 06:40:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:11.277 06:40:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:11.277 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.277 06:40:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:11.277 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.277 
06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:11.277 Found net devices under 0000:09:00.0: cvl_0_0 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.277 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:11.278 Found net devices under 0000:09:00.1: cvl_0_1 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:11.278 06:40:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:11.278 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:11.536 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:11.536 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:30:11.536 00:30:11.536 --- 10.0.0.2 ping statistics --- 00:30:11.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.536 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:11.536 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:11.536 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:30:11.536 00:30:11.536 --- 10.0.0.1 ping statistics --- 00:30:11.536 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:11.536 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:11.536 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=2211932 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 2211932 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # '[' -z 2211932 ']' 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:11.537 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:11.537 [2024-11-20 06:40:43.216578] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:11.537 [2024-11-20 06:40:43.217796] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:30:11.537 [2024-11-20 06:40:43.217854] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:11.537 [2024-11-20 06:40:43.295595] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:11.537 [2024-11-20 06:40:43.357531] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:11.537 [2024-11-20 06:40:43.357586] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:11.537 [2024-11-20 06:40:43.357608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:11.537 [2024-11-20 06:40:43.357626] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:11.537 [2024-11-20 06:40:43.357642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:11.537 [2024-11-20 06:40:43.359292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:11.537 [2024-11-20 06:40:43.359344] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:11.537 [2024-11-20 06:40:43.359348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:11.796 [2024-11-20 06:40:43.463153] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:11.796 [2024-11-20 06:40:43.463395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:11.796 [2024-11-20 06:40:43.463432] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:11.796 [2024-11-20 06:40:43.463736] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@866 -- # return 0 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:11.796 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:12.056 [2024-11-20 06:40:43.824103] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.056 06:40:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:12.626 06:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.884 [2024-11-20 06:40:44.488569] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.884 06:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:13.142 06:40:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:13.400 Malloc0 00:30:13.400 06:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:13.657 Delay0 00:30:13.915 06:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:14.173 06:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:14.432 NULL1 00:30:14.432 06:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 
00:30:14.690 06:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2212335 00:30:14.690 06:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:30:14.690 06:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:14.690 06:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:16.066 Read completed with error (sct=0, sc=11) 00:30:16.066 06:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.066 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.323 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:16.323 06:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:16.323 06:40:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:16.581 true 00:30:16.581 06:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:16.581 06:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:17.516 06:40:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:17.516 06:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:17.516 06:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:17.773 true 00:30:17.773 06:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:17.773 06:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:30:18.030 06:40:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:18.288 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:18.288 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:18.546 true 00:30:18.546 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:18.546 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:18.804 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:19.369 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:19.369 06:40:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:19.369 true 00:30:19.369 06:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:19.369 06:40:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:20.302 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.560 06:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:20.560 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:20.817 06:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:20.817 06:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:21.075 true 00:30:21.075 06:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:21.075 06:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:21.332 06:40:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:21.590 06:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:21.590 06:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:21.847 true 00:30:21.847 06:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:21.847 06:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:22.105 06:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:22.362 06:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:30:22.362 06:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:30:22.620 true 00:30:22.620 06:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:22.620 06:40:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:23.553 06:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:23.810 06:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:30:23.810 06:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:30:24.068 true 00:30:24.325 06:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:24.325 06:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:24.583 06:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:24.841 06:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:30:24.841 06:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:30:25.100 true 00:30:25.100 06:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:25.100 06:40:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:25.357 06:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:25.615 06:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:30:25.615 06:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:30:25.872 true 00:30:25.872 06:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:25.873 06:40:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:26.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.805 06:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:26.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:26.805 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:27.063 06:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:30:27.063 06:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:30:27.321 true 00:30:27.321 06:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:27.321 06:40:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:27.591 06:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:27.856 06:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:30:27.856 06:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:30:28.114 true 00:30:28.114 06:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:28.114 06:40:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.047 06:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.047 06:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:30:29.047 06:41:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:30:29.305 true 00:30:29.305 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:29.305 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:29.562 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:29.820 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:30:29.820 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:30:30.078 true 00:30:30.078 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:30.078 06:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:30.335 06:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:30.900 06:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:30:30.900 06:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:30:30.900 true 00:30:30.901 06:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:30.901 06:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:31.833 06:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.091 06:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:30:32.091 06:41:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:30:32.349 true 00:30:32.349 06:41:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:32.349 06:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:32.915 06:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:32.915 06:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:30:32.915 06:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:30:33.172 true 00:30:33.172 06:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:33.172 06:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:33.430 06:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:33.995 06:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:30:33.995 06:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:30:33.995 true 00:30:33.995 06:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:33.995 06:41:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:34.929 06:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:35.186 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:30:35.186 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:30:35.443 true 00:30:35.443 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:35.443 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.010 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.010 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:30:36.010 06:41:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:30:36.268 true 00:30:36.268 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:36.268 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:36.834 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:36.834 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:30:36.834 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:30:37.092 true 00:30:37.092 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:37.092 06:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:38.024 06:41:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:38.589 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:30:38.589 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:30:38.589 true 00:30:38.589 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:38.589 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.155 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.155 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:30:39.155 06:41:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:30:39.413 true 00:30:39.413 06:41:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:39.413 06:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:39.978 06:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:39.978 06:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:30:39.978 06:41:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:30:40.235 true 00:30:40.235 06:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:40.235 06:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:41.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.227 06:41:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:41.227 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.484 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:41.484 06:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:30:41.484 06:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:30:41.742 true 00:30:41.742 06:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:41.742 06:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:42.000 06:41:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:42.258 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:30:42.258 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:30:42.516 true 00:30:42.516 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:42.516 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:43.081 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:43.081 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:30:43.081 06:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:30:43.339 true 00:30:43.339 06:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:43.339 06:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:44.711 06:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:44.711 06:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:30:44.711 06:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:30:44.969 true 00:30:44.969 Initializing NVMe Controllers 00:30:44.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:44.969 Controller IO queue size 128, less than required. 00:30:44.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:44.969 Controller IO queue size 128, less than required. 00:30:44.969 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:44.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:44.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:44.969 Initialization complete. Launching workers. 
00:30:44.969 ======================================================== 00:30:44.969 Latency(us) 00:30:44.969 Device Information : IOPS MiB/s Average min max 00:30:44.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 594.35 0.29 88516.78 3396.11 1014598.32 00:30:44.969 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 8655.48 4.23 14787.96 1739.99 538116.95 00:30:44.969 ======================================================== 00:30:44.969 Total : 9249.83 4.52 19525.43 1739.99 1014598.32 00:30:44.969 00:30:44.969 06:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:44.970 06:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:45.227 06:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:45.485 06:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:30:45.485 06:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:30:45.743 true 00:30:45.743 06:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2212335 00:30:45.743 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2212335) - No such process 00:30:45.743 06:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2212335 00:30:45.743 06:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:46.001 06:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:30:46.569 null0 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 
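Note: the trace above (ns_hotplug_stress.sh lines @44-@50) repeatedly hot-removes namespace 1 and re-attaches it backed by the Delay0 bdev while an I/O workload (PID 2212335 in this run) keeps running against the subsystem, growing an unrelated NULL1 null bdev by one step per pass (null_size 1003 ... 1029). The summary just printed shows the cost of that churn: the hot-plugged namespace sustains roughly 594 IOPS with ~88.5 ms average latency, versus ~8655 IOPS and ~14.8 ms on the untouched namespace. A minimal bash sketch of that loop, reconstructed only from the trace; rpc_py, perf_pid and null_size are illustrative names, not necessarily the script's own source:

    # Reconstructed from the @44-@50 trace entries above (a sketch, not the actual script text).
    while kill -0 "$perf_pid"; do                                        # keep cycling while the I/O generator is alive
        "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1  # hot-remove NSID 1
        "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it, backed by the Delay0 bdev
        null_size=$((null_size + 1))
        "$rpc_py" bdev_null_resize NULL1 "$null_size"                    # resize NULL1 to the next size each pass
    done

Once PID 2212335 exits, the kill -0 check fails ("No such process" below), the loop ends, and the script waits on the workload before cleaning up the remaining namespaces.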
00:30:46.569 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:30:46.826 null1 00:30:46.827 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:46.827 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:46.827 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:30:47.084 null2 00:30:47.084 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.084 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.085 06:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:30:47.650 null3 00:30:47.650 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.650 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.650 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:30:47.650 null4 00:30:47.650 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.650 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.650 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:30:47.908 null5 00:30:47.908 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:47.908 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:47.908 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:30:48.166 null6 00:30:48.166 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:48.166 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:48.166 06:41:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:30:48.732 null7 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:30:48.732 06:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:30:48.732 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
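Note: after the single-namespace phase is torn down (the wait 2212335 and the two nvmf_subsystem_remove_ns calls above), the script switches to a parallel phase: it creates eight null bdevs with bdev_null_create null0..null7 100 4096 (a 100 MB device with 4096-byte blocks, if the usual RPC units apply) and then launches one background add_remove worker per bdev, saving each worker's PID for the wait seen further down (wait 2216466 2216467 ...). A hedged sketch of that fan-out, based only on what the @58-@64 trace entries show; rpc_py is shorthand for the full scripts/rpc.py path in the log:

    # Reconstructed from the @58-@64 trace entries (a sketch under those assumptions).
    nthreads=8
    pids=()
    for ((i = 0; i < nthreads; i++)); do              # @59-@60: create the backing null bdevs
        "$rpc_py" bdev_null_create "null$i" 100 4096
    done
    for ((i = 0; i < nthreads; i++)); do              # @62-@64: one background worker per namespace
        add_remove $((i + 1)) "null$i" &              # worker i hot-plugs NSID i+1 backed by null$i
        pids+=($!)                                    # remember the PID for the final wait (@66)
    done
    wait "${pids[@]}"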
00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2216466 2216467 2216469 2216471 2216473 2216475 2216477 2216479 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:48.733 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:48.992 06:41:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:48.992 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.250 06:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:49.508 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 
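Note: each backgrounded worker runs the add_remove helper traced at ns_hotplug_stress.sh@14-@18: ten rounds of attaching its null bdev at a fixed NSID and immediately detaching it, which is what produces the interleaved nvmf_subsystem_add_ns / nvmf_subsystem_remove_ns storm above and below, with all eight namespaces cycling concurrently. A minimal reconstruction from the trace; the argument handling is assumed, while the loop bound of 10 and the two RPC calls come straight from the log:

    # Reconstructed add_remove worker (@14-@18 in the trace): ten add/remove cycles per namespace.
    add_remove() {
        local nsid=$1 bdev=$2
        for ((i = 0; i < 10; i++)); do
            "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"  # attach bdev at this NSID
            "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"          # and immediately detach it
        done
    }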
00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.766 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:49.767 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.026 06:41:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.284 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.284 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.284 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:50.284 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.284 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.285 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:50.851 06:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:50.851 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:51.109 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.109 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.109 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:51.109 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.109 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.110 06:41:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.110 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:51.367 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:51.367 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:51.367 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:51.367 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:51.367 06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.367 
06:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:51.367 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:51.367 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:51.625 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:51.883 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:51.883 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:51.883 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:51.883 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:51.883 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:51.884 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:51.884 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:51.884 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 
-- # (( i < 10 )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.142 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.143 
06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.143 06:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:52.401 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:52.659 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.917 06:41:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.917 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:52.917 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.917 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.917 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:52.917 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:52.918 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:53.175 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:53.176 06:41:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:53.434 06:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.434 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:53.692 
06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:53.692 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:53.950 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:30:54.208 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:30:54.208 06:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:30:54.208 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:30:54.208 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:30:54.208 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:30:54.208 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.208 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:30:54.208 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:30:54.466 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.466 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.466 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.466 06:41:26 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.466 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.466 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:54.725 rmmod nvme_tcp 00:30:54.725 rmmod nvme_fabrics 00:30:54.725 rmmod nvme_keyring 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 2211932 ']' 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 2211932 
00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # '[' -z 2211932 ']' 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # kill -0 2211932 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # uname 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2211932 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2211932' 00:30:54.725 killing process with pid 2211932 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@971 -- # kill 2211932 00:30:54.725 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@976 -- # wait 2211932 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:54.984 06:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:57.518 00:30:57.518 real 0m47.921s 00:30:57.518 user 3m19.575s 00:30:57.518 sys 0m21.994s 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1128 -- # xtrace_disable 00:30:57.518 06:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:57.518 ************************************ 00:30:57.518 END TEST nvmf_ns_hotplug_stress 00:30:57.518 ************************************ 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:57.518 ************************************ 00:30:57.518 START TEST nvmf_delete_subsystem 00:30:57.518 ************************************ 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:30:57.518 * Looking for test storage... 00:30:57.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lcov --version 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:30:57.518 06:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:30:57.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.518 --rc genhtml_branch_coverage=1 00:30:57.518 --rc genhtml_function_coverage=1 00:30:57.518 --rc genhtml_legend=1 00:30:57.518 --rc geninfo_all_blocks=1 00:30:57.518 --rc geninfo_unexecuted_blocks=1 00:30:57.518 00:30:57.518 ' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:30:57.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.518 --rc genhtml_branch_coverage=1 00:30:57.518 --rc genhtml_function_coverage=1 00:30:57.518 --rc genhtml_legend=1 00:30:57.518 --rc geninfo_all_blocks=1 00:30:57.518 --rc geninfo_unexecuted_blocks=1 00:30:57.518 00:30:57.518 ' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:30:57.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.518 --rc genhtml_branch_coverage=1 00:30:57.518 --rc genhtml_function_coverage=1 00:30:57.518 --rc genhtml_legend=1 00:30:57.518 --rc geninfo_all_blocks=1 00:30:57.518 --rc 
geninfo_unexecuted_blocks=1 00:30:57.518 00:30:57.518 ' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:30:57.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:57.518 --rc genhtml_branch_coverage=1 00:30:57.518 --rc genhtml_function_coverage=1 00:30:57.518 --rc genhtml_legend=1 00:30:57.518 --rc geninfo_all_blocks=1 00:30:57.518 --rc geninfo_unexecuted_blocks=1 00:30:57.518 00:30:57.518 ' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:57.518 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:57.519 06:41:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:30:57.519 06:41:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:59.418 06:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:59.418 06:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:30:59.418 Found 0000:09:00.0 (0x8086 - 0x159b) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:30:59.418 Found 0000:09:00.1 (0x8086 - 0x159b) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.418 06:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:30:59.418 Found net devices under 0000:09:00.0: cvl_0_0 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:30:59.418 Found net devices under 0000:09:00.1: cvl_0_1 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:59.418 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:59.419 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:59.419 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:30:59.419 00:30:59.419 --- 10.0.0.2 ping statistics --- 00:30:59.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.419 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:59.419 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:59.419 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:30:59.419 00:30:59.419 --- 10.0.0.1 ping statistics --- 00:30:59.419 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:59.419 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:59.419 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=2219231 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 2219231 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # '[' -z 2219231 ']' 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # local max_retries=100 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # xtrace_disable 00:30:59.677 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.677 [2024-11-20 06:41:31.304443] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:59.677 [2024-11-20 06:41:31.305516] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:30:59.677 [2024-11-20 06:41:31.305582] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:59.677 [2024-11-20 06:41:31.376189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:59.677 [2024-11-20 06:41:31.432090] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:59.677 [2024-11-20 06:41:31.432144] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:59.677 [2024-11-20 06:41:31.432174] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:59.677 [2024-11-20 06:41:31.432186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:59.677 [2024-11-20 06:41:31.432196] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:59.677 [2024-11-20 06:41:31.433683] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:59.677 [2024-11-20 06:41:31.433689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:59.936 [2024-11-20 06:41:31.528380] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:59.936 [2024-11-20 06:41:31.528410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:59.936 [2024-11-20 06:41:31.528677] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
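At this point the target application is up: nvmf_tgt has been launched inside the cvl_0_0_ns_spdk namespace (target side, 10.0.0.2 on cvl_0_0) in interrupt mode with core mask 0x3, while the initiator side stays in the root namespace (10.0.0.1 on cvl_0_1). A minimal manual equivalent of this startup, assuming the default RPC socket /var/tmp/spdk.sock and using an rpc.py poll as a stand-in for the waitforlisten helper, would be roughly:

  # launch the target inside the test namespace, same flags as traced above
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 &
  nvmfpid=$!
  # block until the app answers on its RPC socket before issuing any configuration calls
  until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.1; done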
00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@866 -- # return 0 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.936 [2024-11-20 06:41:31.582382] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.936 [2024-11-20 06:41:31.602637] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.936 NULL1 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.936 06:41:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.936 Delay0 00:30:59.936 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2219368 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:30:59.937 06:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:30:59.937 [2024-11-20 06:41:31.683358] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
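Taken together, the rpc_cmd calls traced above set up the delete-while-busy scenario: a TCP transport, a subsystem, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev (so that plenty of I/O is still outstanding when the subsystem is deleted), and a perf job driving traffic at it. The same configuration issued directly through scripts/rpc.py (socket path and working directory assumed) looks roughly like:

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py bdev_null_create NULL1 1000 512
  ./scripts/rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  # start I/O against the namespace, then delete the subsystem out from under it
  ./build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &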
00:31:01.836 06:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:01.836 06:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:01.836 06:41:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 [2024-11-20 06:41:33.893616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145f680 is same with the state(6) to be set 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 
Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read 
completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 starting I/O failed: -6 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 [2024-11-20 06:41:33.895348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9bf8000c40 is same with the state(6) to be set 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.095 Write completed with error (sct=0, sc=8) 00:31:02.095 Read completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Write completed with error (sct=0, sc=8) 00:31:02.096 
Write completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Write completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Write completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:02.096 Read completed with error (sct=0, sc=8) 00:31:03.468 [2024-11-20 06:41:34.863184] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14609a0 is same with the state(6) to be set 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 [2024-11-20 06:41:34.896692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9bf800d020 is same with the state(6) to be set 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with 
error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 [2024-11-20 06:41:34.896926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9bf800d680 is same with the state(6) to be set 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 [2024-11-20 06:41:34.897481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145f4a0 is same with the state(6) to be set 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 Write completed with error (sct=0, sc=8) 00:31:03.468 Read completed with error (sct=0, sc=8) 00:31:03.468 [2024-11-20 06:41:34.898205] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x145f860 is same with the state(6) to be set 00:31:03.468 Initializing NVMe Controllers 00:31:03.468 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:03.468 Controller IO queue size 128, less than required. 00:31:03.468 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:31:03.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:03.468 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:03.468 Initialization complete. Launching workers. 00:31:03.468 ======================================================== 00:31:03.468 Latency(us) 00:31:03.468 Device Information : IOPS MiB/s Average min max 00:31:03.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 158.39 0.08 921190.01 373.86 1011817.16 00:31:03.468 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 158.89 0.08 1002888.71 391.25 2003561.32 00:31:03.468 ======================================================== 00:31:03.468 Total : 317.28 0.15 962103.29 373.86 2003561.32 00:31:03.468 00:31:03.468 [2024-11-20 06:41:34.899015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14609a0 (9): Bad file descriptor 00:31:03.468 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:31:03.468 06:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.468 06:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:31:03.468 06:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2219368 00:31:03.468 06:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2219368 00:31:03.727 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2219368) - No such process 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2219368 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2219368 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2219368 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
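The perf summary above reflects the deletion: the long runs of 'completed with error (sct=0, sc=8)' are consistent with queued commands being aborted while the subsystem and its queues are torn down, and spdk_nvme_perf exits reporting that errors occurred. The Total row is the IOPS-weighted mean of the two per-core averages, which can be checked directly from the printed figures:

  awk 'BEGIN { printf "%.2f\n", (158.39*921190.01 + 158.89*1002888.71) / (158.39 + 158.89) }'
  # prints ~962103.7, in line with the reported 962103.29 us overall average
  # (the small residual comes from rounding in the displayed per-core values)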
common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.727 [2024-11-20 06:41:35.422514] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2219778 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:03.727 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:03.727 [2024-11-20 06:41:35.488510] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
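With the subsystem re-created and the listener and namespace re-added, a second, shorter perf run (-t 3) is started, and this time the script waits for it to run to completion. The repeated kill -0 / sleep 0.5 lines that follow are that wait; the loop traced at delete_subsystem.sh lines 56-60 is, give or take the exact control flow, a bounded poll along these lines:

  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do   # perf still running?
      (( delay++ > 20 )) && exit 1            # bail out if it hangs well past the 3-second run
      sleep 0.5
  done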
00:31:04.292 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:04.293 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:04.293 06:41:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:04.860 06:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:04.861 06:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:04.861 06:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:05.118 06:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:05.118 06:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:05.118 06:41:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:05.683 06:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:05.683 06:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:05.683 06:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:06.246 06:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:06.246 06:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:06.246 06:41:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:06.809 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:06.809 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:06.809 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:31:07.066 Initializing NVMe Controllers 00:31:07.066 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:07.066 Controller IO queue size 128, less than required. 00:31:07.067 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:07.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:07.067 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:07.067 Initialization complete. Launching workers. 
00:31:07.067 ======================================================== 00:31:07.067 Latency(us) 00:31:07.067 Device Information : IOPS MiB/s Average min max 00:31:07.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1005208.41 1000206.60 1043556.67 00:31:07.067 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005124.77 1000201.87 1042046.26 00:31:07.067 ======================================================== 00:31:07.067 Total : 256.00 0.12 1005166.59 1000201.87 1043556.67 00:31:07.067 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2219778 00:31:07.324 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2219778) - No such process 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2219778 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:07.324 06:41:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:07.324 rmmod nvme_tcp 00:31:07.324 rmmod nvme_fabrics 00:31:07.324 rmmod nvme_keyring 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 2219231 ']' 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 2219231 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # '[' -z 2219231 ']' 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # kill -0 2219231 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # uname 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2219231 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2219231' 00:31:07.324 killing process with pid 2219231 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@971 -- # kill 2219231 00:31:07.324 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@976 -- # wait 2219231 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:07.580 06:41:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:10.110 00:31:10.110 real 0m12.547s 00:31:10.110 user 0m25.105s 00:31:10.110 sys 0m3.796s 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:10.110 ************************************ 00:31:10.110 END TEST nvmf_delete_subsystem 00:31:10.110 ************************************ 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
common/autotest_common.sh@1109 -- # xtrace_disable 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:10.110 ************************************ 00:31:10.110 START TEST nvmf_host_management 00:31:10.110 ************************************ 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:10.110 * Looking for test storage... 00:31:10.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lcov --version 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:10.110 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.111 --rc genhtml_branch_coverage=1 00:31:10.111 --rc genhtml_function_coverage=1 00:31:10.111 --rc genhtml_legend=1 00:31:10.111 --rc geninfo_all_blocks=1 00:31:10.111 --rc geninfo_unexecuted_blocks=1 00:31:10.111 00:31:10.111 ' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.111 --rc genhtml_branch_coverage=1 00:31:10.111 --rc genhtml_function_coverage=1 00:31:10.111 --rc genhtml_legend=1 00:31:10.111 --rc geninfo_all_blocks=1 00:31:10.111 --rc geninfo_unexecuted_blocks=1 00:31:10.111 00:31:10.111 ' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.111 --rc genhtml_branch_coverage=1 00:31:10.111 --rc genhtml_function_coverage=1 00:31:10.111 --rc genhtml_legend=1 00:31:10.111 --rc geninfo_all_blocks=1 00:31:10.111 --rc geninfo_unexecuted_blocks=1 00:31:10.111 00:31:10.111 ' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:10.111 --rc genhtml_branch_coverage=1 00:31:10.111 --rc genhtml_function_coverage=1 00:31:10.111 --rc genhtml_legend=1 
00:31:10.111 --rc geninfo_all_blocks=1 00:31:10.111 --rc geninfo_unexecuted_blocks=1 00:31:10.111 00:31:10.111 ' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:10.111 06:41:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:31:10.111 06:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:12.013 06:41:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:12.013 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:12.014 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:12.014 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 
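The device scan above is driven by a prepared PCI cache in nvmf/common.sh (pci_bus_cache keyed by vendor:device, here Intel 0x8086:0x159b for the two E810 ports bound to the ice driver). Outside the harness, a rough stand-in, for illustration only, is to filter on the same PCI ID with lspci and then read the net devices back out of sysfs the same way common.sh@411 does:

# Illustrative stand-in for gather_supported_nvmf_pci_devs (not the harness code):
# list E810 functions (vendor 0x8086, device 0x159b) and the net device behind each.
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
  for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
    [ -e "$netdir" ] || continue
    echo "Found net device under $pci: $(basename "$netdir")"
  done
done

On this node that resolves to the two cvl_0_* ports reported a few records further on.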
00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:12.014 Found net devices under 0000:09:00.0: cvl_0_0 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:12.014 Found net devices under 0000:09:00.1: cvl_0_1 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:12.014 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:12.014 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:31:12.014 00:31:12.014 --- 10.0.0.2 ping statistics --- 00:31:12.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.014 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:12.014 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
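The nvmf_tcp_init block above builds the back-to-back topology the rest of the suite relies on: the first E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as the target side (10.0.0.2/24), the second port (cvl_0_1) stays in the root namespace as the initiator side (10.0.0.1/24), an iptables rule opens TCP/4420, and a ping in each direction confirms the link. Condensed from the trace, with error handling omitted:

# Condensed from the nvmf_tcp_init commands traced above (interface names and
# addresses are the ones this node reports; run as root).
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target-side port
ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF
ping -c 1 10.0.0.2                                     # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1       # target -> initiator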
00:31:12.014 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:31:12.014 00:31:12.014 --- 10.0.0.1 ping statistics --- 00:31:12.014 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:12.014 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:12.014 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=2222114 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 2222114 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2222114 ']' 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:31:12.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:12.015 06:41:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.015 [2024-11-20 06:41:43.769527] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:12.015 [2024-11-20 06:41:43.770555] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:31:12.015 [2024-11-20 06:41:43.770614] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:12.015 [2024-11-20 06:41:43.838638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:12.273 [2024-11-20 06:41:43.896806] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:12.273 [2024-11-20 06:41:43.896872] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:12.273 [2024-11-20 06:41:43.896893] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:12.273 [2024-11-20 06:41:43.896910] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:12.273 [2024-11-20 06:41:43.896924] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:12.273 [2024-11-20 06:41:43.898547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:12.273 [2024-11-20 06:41:43.898610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:12.273 [2024-11-20 06:41:43.898661] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:12.273 [2024-11-20 06:41:43.898664] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:12.273 [2024-11-20 06:41:43.985126] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:12.273 [2024-11-20 06:41:43.985354] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:12.273 [2024-11-20 06:41:43.985671] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:12.273 [2024-11-20 06:41:43.986251] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:12.273 [2024-11-20 06:41:43.986548] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
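At this point the target is up: nvmf_tgt was launched inside the target namespace with -m 0x1E --interrupt-mode (nvmf/common.sh@508 above), the EAL came up on four cores, and every reactor and nvmf_tgt poll group thread reports interrupt mode. The harness then blocks in its waitforlisten helper; a simpler sketch of the same idea, shown here only as an approximation, is to poll the RPC socket until it answers:

# Sketch only - the test uses waitforlisten from autotest_common.sh; polling the
# RPC socket with rpc_get_methods is a rough equivalent. Paths are relative to the
# SPDK source tree.
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E &
nvmfpid=$!
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.5
done
echo "nvmf_tgt (pid $nvmfpid) is listening on /var/tmp/spdk.sock"

Core mask 0x1E also explains the "Total cores available: 4" notice: reactors run on cores 1-4, leaving core 0 for the bdevperf initiator, which the trace below shows coming up with -c 0x1.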
00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.273 [2024-11-20 06:41:44.039420] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.273 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 Malloc0 00:31:12.559 [2024-11-20 06:41:44.127564] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2222271 00:31:12.559 06:41:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2222271 /var/tmp/bdevperf.sock 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@833 -- # '[' -z 2222271 ']' 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:12.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:12.559 { 00:31:12.559 "params": { 00:31:12.559 "name": "Nvme$subsystem", 00:31:12.559 "trtype": "$TEST_TRANSPORT", 00:31:12.559 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:12.559 "adrfam": "ipv4", 00:31:12.559 "trsvcid": "$NVMF_PORT", 00:31:12.559 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:12.559 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:12.559 "hdgst": ${hdgst:-false}, 00:31:12.559 "ddgst": ${ddgst:-false} 00:31:12.559 }, 00:31:12.559 "method": "bdev_nvme_attach_controller" 00:31:12.559 } 00:31:12.559 EOF 00:31:12.559 )") 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
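host_management.sh@72 feeds bdevperf its bdev configuration through process substitution: gen_nvmf_target_json renders the heredoc above into a JSON config whose entry attaches an NVMe-oF controller over TCP to the subsystem the target just exposed on 10.0.0.2:4420. A standalone file of the same shape (a minimal hand-written equivalent; the helper may add further bdev options) can be passed to bdevperf directly:

# Minimal hand-written equivalent of the generated config (assumed shape, not
# gen_nvmf_target_json's output verbatim).
cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
./build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -r /var/tmp/bdevperf.sock \
    -q 64 -o 65536 -w verify -t 10

The resolved parameters printed just below (name Nvme0, traddr 10.0.0.2, trsvcid 4420, the cnode0/host0 NQNs, digests off) are exactly what this file encodes, and -q 64 -o 65536 -w verify -t 10 drives a 10-second, 64-deep verify workload against the resulting Nvme0n1 bdev.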
00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:12.559 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:12.559 "params": { 00:31:12.559 "name": "Nvme0", 00:31:12.559 "trtype": "tcp", 00:31:12.559 "traddr": "10.0.0.2", 00:31:12.559 "adrfam": "ipv4", 00:31:12.559 "trsvcid": "4420", 00:31:12.559 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:12.559 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:12.559 "hdgst": false, 00:31:12.559 "ddgst": false 00:31:12.559 }, 00:31:12.559 "method": "bdev_nvme_attach_controller" 00:31:12.559 }' 00:31:12.559 [2024-11-20 06:41:44.214982] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:31:12.559 [2024-11-20 06:41:44.215058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222271 ] 00:31:12.559 [2024-11-20 06:41:44.284719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:12.559 [2024-11-20 06:41:44.345202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:12.841 Running I/O for 10 seconds... 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@866 -- # return 0 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:31:12.841 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:31:13.099 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:31:13.099 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:13.099 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:13.099 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:13.099 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.099 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:13.358 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.358 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=543 00:31:13.358 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 543 -ge 100 ']' 00:31:13.358 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:13.358 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:13.358 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:13.359 [2024-11-20 06:41:44.967403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967464] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967479] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967490] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967503] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967538] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967573] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967584] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967706] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the 
state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967785] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967852] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.967875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22983a0 is same with the state(6) to be set 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:31:13.359 06:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:13.359 [2024-11-20 06:41:44.981758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.359 [2024-11-20 06:41:44.981800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.981819] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.359 [2024-11-20 06:41:44.981834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.981848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 
cdw11:00000000 00:31:13.359 [2024-11-20 06:41:44.981861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.981875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:13.359 [2024-11-20 06:41:44.981889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.981902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac4a40 is same with the state(6) to be set 00:31:13.359 [2024-11-20 06:41:44.981993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:82048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:82176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 
06:41:44.982264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:82816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:83072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:83200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.359 [2024-11-20 06:41:44.982441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:83328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.359 [2024-11-20 06:41:44.982454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:83456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:83584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:83712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:83840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982579] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:83968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:84224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:84352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:84480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:84608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:84736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:84864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:84992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:85504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:85632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.982975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.982989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:86016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983158] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:86656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:86784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:86912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:87040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:87168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:87296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:87424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:87552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:87680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:87808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:87936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:88064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:88192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.360 [2024-11-20 06:41:44.983579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:88320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.360 [2024-11-20 06:41:44.983593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:88448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:88576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:88704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:88832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:88960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:89088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:89216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:89344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:89472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:89600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.983898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:89728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.361 [2024-11-20 06:41:44.983911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.361 [2024-11-20 06:41:44.985124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:13.361 task offset: 81664 on job bdev=Nvme0n1 fails 00:31:13.361 00:31:13.361 Latency(us) 00:31:13.361 [2024-11-20T05:41:45.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:13.361 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:13.361 Job: Nvme0n1 ended in about 0.42 seconds with error 00:31:13.361 Verification LBA range: start 0x0 length 0x400 00:31:13.361 Nvme0n1 : 0.42 1533.55 95.85 153.84 0.00 36871.17 2585.03 34758.35 00:31:13.361 [2024-11-20T05:41:45.197Z] =================================================================================================================== 00:31:13.361 [2024-11-20T05:41:45.197Z] Total : 1533.55 95.85 153.84 0.00 36871.17 2585.03 34758.35 00:31:13.361 [2024-11-20 06:41:44.987018] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:13.361 [2024-11-20 06:41:44.987046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac4a40 (9): Bad file descriptor 00:31:13.361 [2024-11-20 06:41:44.991175] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
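The abort dump and the failed 0.42 s job above line up with the step traced at target/host_management.sh@85: the test re-adds the host NQN to cnode0's allow list while bdevperf still has verify I/O in flight, so the commands queued on the old queue pair complete as ABORTED - SQ DELETION before the controller reset at 06:41:44.985 succeeds. A minimal sketch of issuing that same RPC by hand is below; it reuses the rpc.py path this run exports as rpc_py, and the follow-up grep is only an illustrative sanity check, not part of the captured output.

    #!/usr/bin/env bash
    # Re-allow the bdevperf host on the target subsystem, then confirm it shows up.
    rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc_py" nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    # nvmf_get_subsystems prints the subsystem list as JSON; the host NQN should now
    # appear under cnode0's "hosts" array.
    "$rpc_py" nvmf_get_subsystems | grep -A2 host0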
00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2222271 00:31:14.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2222271) - No such process 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:31:14.293 { 00:31:14.293 "params": { 00:31:14.293 "name": "Nvme$subsystem", 00:31:14.293 "trtype": "$TEST_TRANSPORT", 00:31:14.293 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:14.293 "adrfam": "ipv4", 00:31:14.293 "trsvcid": "$NVMF_PORT", 00:31:14.293 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:14.293 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:14.293 "hdgst": ${hdgst:-false}, 00:31:14.293 "ddgst": ${ddgst:-false} 00:31:14.293 }, 00:31:14.293 "method": "bdev_nvme_attach_controller" 00:31:14.293 } 00:31:14.293 EOF 00:31:14.293 )") 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:31:14.293 06:41:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:31:14.293 "params": { 00:31:14.293 "name": "Nvme0", 00:31:14.293 "trtype": "tcp", 00:31:14.293 "traddr": "10.0.0.2", 00:31:14.293 "adrfam": "ipv4", 00:31:14.293 "trsvcid": "4420", 00:31:14.293 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:14.293 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:14.293 "hdgst": false, 00:31:14.293 "ddgst": false 00:31:14.293 }, 00:31:14.293 "method": "bdev_nvme_attach_controller" 00:31:14.293 }' 00:31:14.293 [2024-11-20 06:41:46.029049] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
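The trace just above is the harness building bdevperf's configuration inline: gen_nvmf_target_json expands the heredoc once for subsystem 0, jq validates it, and printf hands the result to bdevperf on /dev/fd/62. A by-hand sketch of the same run follows, writing the config to a regular file instead; the outer "subsystems"/"bdev" wrapper is an assumption about what gen_nvmf_target_json wraps around the printed fragment (only the fragment itself appears in this log), and the /tmp path is arbitrary.

    #!/usr/bin/env bash
    # Attach-controller config equivalent to the fragment printed above.
    cat > /tmp/bdevperf-nvme0.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Same workload parameters as the traced command: queue depth 64, 64 KiB verify I/O, 1 second.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
        --json /tmp/bdevperf-nvme0.json -q 64 -o 65536 -w verify -t 1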
00:31:14.293 [2024-11-20 06:41:46.029128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2222438 ] 00:31:14.293 [2024-11-20 06:41:46.099169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:14.601 [2024-11-20 06:41:46.161219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:14.601 Running I/O for 1 seconds... 00:31:15.975 1664.00 IOPS, 104.00 MiB/s 00:31:15.975 Latency(us) 00:31:15.975 [2024-11-20T05:41:47.811Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:15.975 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:15.975 Verification LBA range: start 0x0 length 0x400 00:31:15.975 Nvme0n1 : 1.02 1691.00 105.69 0.00 0.00 37144.03 7330.32 33399.09 00:31:15.975 [2024-11-20T05:41:47.811Z] =================================================================================================================== 00:31:15.975 [2024-11-20T05:41:47.811Z] Total : 1691.00 105.69 0.00 0.00 37144.03 7330.32 33399.09 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:15.975 rmmod nvme_tcp 00:31:15.975 rmmod nvme_fabrics 00:31:15.975 rmmod nvme_keyring 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 2222114 ']' 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 2222114 00:31:15.975 06:41:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@952 -- # '[' -z 2222114 ']' 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@956 -- # kill -0 2222114 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # uname 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2222114 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2222114' 00:31:15.975 killing process with pid 2222114 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@971 -- # kill 2222114 00:31:15.975 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@976 -- # wait 2222114 00:31:16.234 [2024-11-20 06:41:47.930206] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:16.234 06:41:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:31:18.762 00:31:18.762 real 0m8.621s 00:31:18.762 user 
0m17.230s 00:31:18.762 sys 0m3.546s 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:18.762 ************************************ 00:31:18.762 END TEST nvmf_host_management 00:31:18.762 ************************************ 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:18.762 ************************************ 00:31:18.762 START TEST nvmf_lvol 00:31:18.762 ************************************ 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:31:18.762 * Looking for test storage... 00:31:18.762 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lcov --version 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 
00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:18.762 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:18.762 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.762 --rc genhtml_branch_coverage=1 00:31:18.762 --rc genhtml_function_coverage=1 00:31:18.762 --rc genhtml_legend=1 00:31:18.763 --rc geninfo_all_blocks=1 00:31:18.763 --rc geninfo_unexecuted_blocks=1 00:31:18.763 00:31:18.763 ' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.763 --rc genhtml_branch_coverage=1 00:31:18.763 --rc genhtml_function_coverage=1 00:31:18.763 --rc genhtml_legend=1 00:31:18.763 --rc geninfo_all_blocks=1 00:31:18.763 --rc geninfo_unexecuted_blocks=1 00:31:18.763 00:31:18.763 ' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.763 --rc genhtml_branch_coverage=1 00:31:18.763 --rc genhtml_function_coverage=1 00:31:18.763 --rc genhtml_legend=1 00:31:18.763 --rc geninfo_all_blocks=1 00:31:18.763 --rc geninfo_unexecuted_blocks=1 00:31:18.763 00:31:18.763 ' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:18.763 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:18.763 --rc genhtml_branch_coverage=1 00:31:18.763 --rc genhtml_function_coverage=1 
00:31:18.763 --rc genhtml_legend=1 00:31:18.763 --rc geninfo_all_blocks=1 00:31:18.763 --rc geninfo_unexecuted_blocks=1 00:31:18.763 00:31:18.763 ' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:18.763 06:41:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:31:18.763 06:41:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:20.663 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.663 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:31:20.663 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:20.663 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:20.663 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@317 -- # pci_drivers=() 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:20.664 06:41:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:20.664 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:20.664 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:20.664 Found net devices under 0000:09:00.0: cvl_0_0 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:20.664 Found net devices under 0000:09:00.1: cvl_0_1 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:20.664 
06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:20.664 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:20.923 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.923 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:31:20.923 00:31:20.923 --- 10.0.0.2 ping statistics --- 00:31:20.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.923 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:20.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:20.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.067 ms 00:31:20.923 00:31:20.923 --- 10.0.0.1 ping statistics --- 00:31:20.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.923 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:20.923 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=2224640 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 2224640 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@833 -- # '[' -z 2224640 ']' 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:20.924 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:20.924 [2024-11-20 06:41:52.592285] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
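The trace above is the nvmf_tcp_init phase of nvmf/common.sh: the two e810 ports discovered earlier (cvl_0_0 and cvl_0_1) are split into a target side and an initiator side, with the target port moved into a private network namespace so that traffic between 10.0.0.1 and 10.0.0.2 cannot short-circuit through host loopback and instead uses the physical ports (NET_TYPE=phy in this job). Condensed into plain shell, using the names and addresses from this run, the setup amounts to the sketch below; it is a recap of the commands already traced above, not a substitute for the script.

ip netns add cvl_0_0_ns_spdk                                         # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                            # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator address stays in the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2                                                   # host -> namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # namespace -> host
modprobe nvme-tcp                                                    # kernel NVMe/TCP initiator for later connects

The two pings are the sanity check whose output appears above; only after both succeed does the harness go on to start the target application.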
00:31:20.924 [2024-11-20 06:41:52.593373] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:31:20.924 [2024-11-20 06:41:52.593448] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.924 [2024-11-20 06:41:52.664013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:20.924 [2024-11-20 06:41:52.718203] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.924 [2024-11-20 06:41:52.718259] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.924 [2024-11-20 06:41:52.718286] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.924 [2024-11-20 06:41:52.718297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.924 [2024-11-20 06:41:52.718315] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.924 [2024-11-20 06:41:52.719797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.924 [2024-11-20 06:41:52.719917] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.924 [2024-11-20 06:41:52.719921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:21.182 [2024-11-20 06:41:52.808820] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:21.182 [2024-11-20 06:41:52.809002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:21.182 [2024-11-20 06:41:52.809007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:21.182 [2024-11-20 06:41:52.809259] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
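With the plumbing verified, nvmfappstart launches nvmf_tgt inside that namespace in interrupt mode; the notices above show three reactors coming up for -m 0x7 and each spdk_thread being switched to interrupt mode. The harness then waits for the application's RPC socket before issuing any rpc.py calls. A minimal stand-alone equivalent of that launch-and-wait step is sketched below; the polling loop is only a simplification of the waitforlisten helper, and the 10-second cap is an arbitrary value chosen for the sketch.

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Launch the target inside the test namespace, as traced above.
ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" \
    -i 0 -e 0xFFFF --interrupt-mode -m 0x7 &
nvmfpid=$!

# Poll for the RPC UNIX socket instead of racing the app's startup.
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done
[ -S /var/tmp/spdk.sock ] || { echo "nvmf_tgt (pid $nvmfpid) never came up" >&2; exit 1; }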
00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@866 -- # return 0 00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:21.182 06:41:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:21.440 [2024-11-20 06:41:53.100635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:21.440 06:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.699 06:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:31:21.699 06:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:21.958 06:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:31:21.958 06:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:31:22.216 06:41:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:31:22.474 06:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=11404941-f40b-44fb-b537-0e3f2df635f3 00:31:22.474 06:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 11404941-f40b-44fb-b537-0e3f2df635f3 lvol 20 00:31:22.732 06:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=c4407936-d626-46af-be5c-112e8a35adac 00:31:22.732 06:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:23.296 06:41:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c4407936-d626-46af-be5c-112e8a35adac 00:31:23.296 06:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:23.555 [2024-11-20 06:41:55.324831] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:31:23.555 06:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:23.813 06:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2225059 00:31:23.813 06:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:31:23.813 06:41:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:31:25.185 06:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot c4407936-d626-46af-be5c-112e8a35adac MY_SNAPSHOT 00:31:25.186 06:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=87703157-8365-4bb7-90db-9aab3c7312b5 00:31:25.186 06:41:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize c4407936-d626-46af-be5c-112e8a35adac 30 00:31:25.751 06:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 87703157-8365-4bb7-90db-9aab3c7312b5 MY_CLONE 00:31:25.751 06:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=7fa546ed-9ad8-4dce-8d04-60818033aca6 00:31:25.751 06:41:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 7fa546ed-9ad8-4dce-8d04-60818033aca6 00:31:26.684 06:41:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2225059 00:31:34.793 Initializing NVMe Controllers 00:31:34.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:31:34.793 Controller IO queue size 128, less than required. 00:31:34.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:34.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:31:34.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:31:34.793 Initialization complete. Launching workers. 
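Between starting the target and the performance report that follows, nvmf_lvol.sh builds its device stack and then exercises lvol operations while spdk_nvme_perf (-q 128 -w randwrite, pinned to cores 3 and 4 via -c 0x18) keeps write I/O in flight. Condensed into plain shell, the RPC sequence traced above looks like the sketch below; the variables simply capture what each create call prints on stdout, and the concrete UUIDs for this run are the ones shown in the trace.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                  # Malloc0
$rpc bdev_malloc_create 64 512                                  # Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'  # RAID0 over both malloc bdevs
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                  # lvstore UUID on stdout
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                 # lvol UUID on stdout
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420

# With spdk_nvme_perf writing to the exported namespace in the background:
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)
$rpc bdev_lvol_inflate "$clone"

The interesting part is that the snapshot, resize, clone and inflate calls are all issued while the volume is under write load; the table that follows is the perf run's own latency summary for the two submitting cores.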
00:31:34.793 ======================================================== 00:31:34.793 Latency(us) 00:31:34.793 Device Information : IOPS MiB/s Average min max 00:31:34.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10159.90 39.69 12599.03 473.58 103584.59 00:31:34.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10389.00 40.58 12322.76 5845.52 56195.26 00:31:34.793 ======================================================== 00:31:34.793 Total : 20548.90 80.27 12459.36 473.58 103584.59 00:31:34.793 00:31:34.793 06:42:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:34.793 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c4407936-d626-46af-be5c-112e8a35adac 00:31:34.793 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 11404941-f40b-44fb-b537-0e3f2df635f3 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:35.051 rmmod nvme_tcp 00:31:35.051 rmmod nvme_fabrics 00:31:35.051 rmmod nvme_keyring 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 2224640 ']' 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 2224640 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@952 -- # '[' -z 2224640 ']' 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@956 -- # kill -0 2224640 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # uname 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2224640 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2224640' 00:31:35.051 killing process with pid 2224640 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@971 -- # kill 2224640 00:31:35.051 06:42:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@976 -- # wait 2224640 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:35.309 06:42:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:37.843 00:31:37.843 real 0m19.103s 00:31:37.843 user 0m55.935s 00:31:37.843 sys 0m7.803s 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:31:37.843 ************************************ 00:31:37.843 END TEST nvmf_lvol 00:31:37.843 ************************************ 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:37.843 ************************************ 00:31:37.843 START TEST nvmf_lvs_grow 00:31:37.843 
************************************ 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:31:37.843 * Looking for test storage... 00:31:37.843 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lcov --version 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:37.843 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:31:37.843 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.843 --rc genhtml_branch_coverage=1 00:31:37.843 --rc genhtml_function_coverage=1 00:31:37.843 --rc genhtml_legend=1 00:31:37.843 --rc geninfo_all_blocks=1 00:31:37.843 --rc geninfo_unexecuted_blocks=1 00:31:37.843 00:31:37.843 ' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:31:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.844 --rc genhtml_branch_coverage=1 00:31:37.844 --rc genhtml_function_coverage=1 00:31:37.844 --rc genhtml_legend=1 00:31:37.844 --rc geninfo_all_blocks=1 00:31:37.844 --rc geninfo_unexecuted_blocks=1 00:31:37.844 00:31:37.844 ' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:31:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.844 --rc genhtml_branch_coverage=1 00:31:37.844 --rc genhtml_function_coverage=1 00:31:37.844 --rc genhtml_legend=1 00:31:37.844 --rc geninfo_all_blocks=1 00:31:37.844 --rc geninfo_unexecuted_blocks=1 00:31:37.844 00:31:37.844 ' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:31:37.844 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:37.844 --rc genhtml_branch_coverage=1 00:31:37.844 --rc genhtml_function_coverage=1 00:31:37.844 --rc genhtml_legend=1 00:31:37.844 --rc geninfo_all_blocks=1 00:31:37.844 --rc geninfo_unexecuted_blocks=1 00:31:37.844 00:31:37.844 ' 00:31:37.844 06:42:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:31:37.844 06:42:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:39.741 06:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.741 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:31:39.742 Found 0000:09:00.0 (0x8086 - 0x159b) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:31:39.742 Found 0000:09:00.1 (0x8086 - 0x159b) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:31:39.742 Found net devices under 0000:09:00.0: cvl_0_0 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for 
pci in "${pci_devs[@]}" 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:31:39.742 Found net devices under 0000:09:00.1: cvl_0_1 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:39.742 06:42:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:39.742 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:39.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:39.743 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.167 ms 00:31:39.743 00:31:39.743 --- 10.0.0.2 ping statistics --- 00:31:39.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.743 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:39.743 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:39.743 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:31:39.743 00:31:39.743 --- 10.0.0.1 ping statistics --- 00:31:39.743 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.743 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:31:39.743 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=2228925 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 2228925 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # '[' -z 2228925 ']' 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:40.000 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:40.000 [2024-11-20 06:42:11.624437] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
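Worth noting in both setups above: each time the harness opens the NVMe/TCP port it does so through the ipts wrapper, which tags the rule with an SPDK_NVMF: comment, and the teardown of the previous test (the iptr step traced earlier) rebuilds the ruleset from iptables-save with everything carrying that tag filtered out. In isolation the pattern is just the following sketch, using the interface and port from this run.

# Add a rule tagged with a recognizable comment (what the ipts helper does above).
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

# Later, drop every tagged rule in one sweep (the iptr step in the teardown above).
iptables-save | grep -v SPDK_NVMF | iptables-restore

Because cleanup matches the tag rather than an exact rule, successive test runs like the two above can each insert their own copy without coordinating with one another.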
00:31:40.000 [2024-11-20 06:42:11.625569] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:31:40.000 [2024-11-20 06:42:11.625649] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.000 [2024-11-20 06:42:11.696540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:40.000 [2024-11-20 06:42:11.749175] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.000 [2024-11-20 06:42:11.749230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.000 [2024-11-20 06:42:11.749257] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.000 [2024-11-20 06:42:11.749268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.000 [2024-11-20 06:42:11.749278] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:40.000 [2024-11-20 06:42:11.749848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.258 [2024-11-20 06:42:11.840686] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:40.259 [2024-11-20 06:42:11.840992] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@866 -- # return 0 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:40.259 06:42:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:40.516 [2024-11-20 06:42:12.138478] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:40.516 ************************************ 00:31:40.516 START TEST lvs_grow_clean 00:31:40.516 ************************************ 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1127 -- # 
lvs_grow 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:40.516 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:40.517 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:40.517 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:40.517 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:40.517 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:40.776 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:40.776 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:41.037 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:41.037 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:41.037 06:42:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:41.295 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:41.295 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:41.295 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 6f00351e-0159-4fbc-9c77-bc1444c207cf lvol 150 00:31:41.555 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f36f5347-2fa7-4ec9-9a07-7d869b3c95dc 00:31:41.555 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:41.555 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:41.814 [2024-11-20 06:42:13.602370] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:41.814 [2024-11-20 06:42:13.602470] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:41.814 true 00:31:41.814 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:41.814 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:31:42.073 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:31:42.074 06:42:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:31:42.332 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f36f5347-2fa7-4ec9-9a07-7d869b3c95dc 00:31:42.897 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:42.897 [2024-11-20 06:42:14.690656] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.897 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2229367 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2229367 /var/tmp/bdevperf.sock 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # '[' -z 2229367 ']' 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@837 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # local max_retries=100 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:43.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # xtrace_disable 00:31:43.155 06:42:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:43.415 [2024-11-20 06:42:15.030010] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:31:43.415 [2024-11-20 06:42:15.030098] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2229367 ] 00:31:43.415 [2024-11-20 06:42:15.097173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:43.415 [2024-11-20 06:42:15.158373] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:43.676 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:31:43.676 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@866 -- # return 0 00:31:43.676 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:31:43.935 Nvme0n1 00:31:43.935 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:31:44.193 [ 00:31:44.193 { 00:31:44.193 "name": "Nvme0n1", 00:31:44.193 "aliases": [ 00:31:44.193 "f36f5347-2fa7-4ec9-9a07-7d869b3c95dc" 00:31:44.193 ], 00:31:44.193 "product_name": "NVMe disk", 00:31:44.193 "block_size": 4096, 00:31:44.193 "num_blocks": 38912, 00:31:44.193 "uuid": "f36f5347-2fa7-4ec9-9a07-7d869b3c95dc", 00:31:44.193 "numa_id": 0, 00:31:44.193 "assigned_rate_limits": { 00:31:44.193 "rw_ios_per_sec": 0, 00:31:44.193 "rw_mbytes_per_sec": 0, 00:31:44.193 "r_mbytes_per_sec": 0, 00:31:44.193 "w_mbytes_per_sec": 0 00:31:44.193 }, 00:31:44.193 "claimed": false, 00:31:44.193 "zoned": false, 00:31:44.193 "supported_io_types": { 00:31:44.193 "read": true, 00:31:44.193 "write": true, 00:31:44.193 "unmap": true, 00:31:44.193 "flush": true, 00:31:44.193 "reset": true, 00:31:44.193 "nvme_admin": true, 00:31:44.193 "nvme_io": true, 00:31:44.193 "nvme_io_md": false, 00:31:44.193 "write_zeroes": true, 00:31:44.193 "zcopy": false, 00:31:44.193 "get_zone_info": false, 00:31:44.193 "zone_management": false, 00:31:44.193 "zone_append": false, 00:31:44.193 "compare": true, 00:31:44.193 "compare_and_write": true, 00:31:44.193 "abort": true, 00:31:44.193 "seek_hole": false, 00:31:44.193 "seek_data": false, 00:31:44.193 "copy": true, 
00:31:44.193 "nvme_iov_md": false 00:31:44.193 }, 00:31:44.193 "memory_domains": [ 00:31:44.193 { 00:31:44.193 "dma_device_id": "system", 00:31:44.193 "dma_device_type": 1 00:31:44.193 } 00:31:44.193 ], 00:31:44.193 "driver_specific": { 00:31:44.193 "nvme": [ 00:31:44.193 { 00:31:44.193 "trid": { 00:31:44.193 "trtype": "TCP", 00:31:44.193 "adrfam": "IPv4", 00:31:44.193 "traddr": "10.0.0.2", 00:31:44.193 "trsvcid": "4420", 00:31:44.193 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:31:44.193 }, 00:31:44.193 "ctrlr_data": { 00:31:44.193 "cntlid": 1, 00:31:44.193 "vendor_id": "0x8086", 00:31:44.193 "model_number": "SPDK bdev Controller", 00:31:44.193 "serial_number": "SPDK0", 00:31:44.193 "firmware_revision": "25.01", 00:31:44.193 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:44.193 "oacs": { 00:31:44.194 "security": 0, 00:31:44.194 "format": 0, 00:31:44.194 "firmware": 0, 00:31:44.194 "ns_manage": 0 00:31:44.194 }, 00:31:44.194 "multi_ctrlr": true, 00:31:44.194 "ana_reporting": false 00:31:44.194 }, 00:31:44.194 "vs": { 00:31:44.194 "nvme_version": "1.3" 00:31:44.194 }, 00:31:44.194 "ns_data": { 00:31:44.194 "id": 1, 00:31:44.194 "can_share": true 00:31:44.194 } 00:31:44.194 } 00:31:44.194 ], 00:31:44.194 "mp_policy": "active_passive" 00:31:44.194 } 00:31:44.194 } 00:31:44.194 ] 00:31:44.194 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2229432 00:31:44.194 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:44.194 06:42:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:31:44.194 Running I/O for 10 seconds... 
00:31:45.568 Latency(us) 00:31:45.568 [2024-11-20T05:42:17.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:45.568 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:45.568 Nvme0n1 : 1.00 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:31:45.568 [2024-11-20T05:42:17.404Z] =================================================================================================================== 00:31:45.568 [2024-11-20T05:42:17.404Z] Total : 14732.00 57.55 0.00 0.00 0.00 0.00 0.00 00:31:45.568 00:31:46.139 06:42:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:46.397 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:46.397 Nvme0n1 : 2.00 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:31:46.397 [2024-11-20T05:42:18.233Z] =================================================================================================================== 00:31:46.397 [2024-11-20T05:42:18.233Z] Total : 14795.50 57.79 0.00 0.00 0.00 0.00 0.00 00:31:46.397 00:31:46.397 true 00:31:46.398 06:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:46.398 06:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:31:46.965 06:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:31:46.965 06:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:31:46.965 06:42:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2229432 00:31:47.225 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:47.225 Nvme0n1 : 3.00 14901.33 58.21 0.00 0.00 0.00 0.00 0.00 00:31:47.225 [2024-11-20T05:42:19.061Z] =================================================================================================================== 00:31:47.225 [2024-11-20T05:42:19.061Z] Total : 14901.33 58.21 0.00 0.00 0.00 0.00 0.00 00:31:47.225 00:31:48.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:48.604 Nvme0n1 : 4.00 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:48.604 [2024-11-20T05:42:20.440Z] =================================================================================================================== 00:31:48.604 [2024-11-20T05:42:20.440Z] Total : 14986.00 58.54 0.00 0.00 0.00 0.00 0.00 00:31:48.604 00:31:49.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:49.544 Nvme0n1 : 5.00 15011.40 58.64 0.00 0.00 0.00 0.00 0.00 00:31:49.544 [2024-11-20T05:42:21.380Z] =================================================================================================================== 00:31:49.544 [2024-11-20T05:42:21.380Z] Total : 15011.40 58.64 0.00 0.00 0.00 0.00 0.00 00:31:49.544 00:31:50.527 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:50.527 Nvme0n1 : 6.00 15091.83 58.95 0.00 0.00 0.00 0.00 0.00 00:31:50.527 [2024-11-20T05:42:22.363Z] 
=================================================================================================================== 00:31:50.527 [2024-11-20T05:42:22.363Z] Total : 15091.83 58.95 0.00 0.00 0.00 0.00 0.00 00:31:50.527 00:31:51.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:51.487 Nvme0n1 : 7.00 15149.29 59.18 0.00 0.00 0.00 0.00 0.00 00:31:51.487 [2024-11-20T05:42:23.323Z] =================================================================================================================== 00:31:51.487 [2024-11-20T05:42:23.323Z] Total : 15149.29 59.18 0.00 0.00 0.00 0.00 0.00 00:31:51.487 00:31:52.426 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:52.426 Nvme0n1 : 8.00 15192.38 59.35 0.00 0.00 0.00 0.00 0.00 00:31:52.426 [2024-11-20T05:42:24.262Z] =================================================================================================================== 00:31:52.426 [2024-11-20T05:42:24.262Z] Total : 15192.38 59.35 0.00 0.00 0.00 0.00 0.00 00:31:52.426 00:31:53.363 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:53.363 Nvme0n1 : 9.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:31:53.363 [2024-11-20T05:42:25.199Z] =================================================================================================================== 00:31:53.363 [2024-11-20T05:42:25.199Z] Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:31:53.363 00:31:54.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.301 Nvme0n1 : 10.00 15265.40 59.63 0.00 0.00 0.00 0.00 0.00 00:31:54.301 [2024-11-20T05:42:26.137Z] =================================================================================================================== 00:31:54.301 [2024-11-20T05:42:26.137Z] Total : 15265.40 59.63 0.00 0.00 0.00 0.00 0.00 00:31:54.301 00:31:54.301 00:31:54.301 Latency(us) 00:31:54.301 [2024-11-20T05:42:26.137Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:31:54.301 Nvme0n1 : 10.01 15268.36 59.64 0.00 0.00 8378.69 7573.05 19126.80 00:31:54.301 [2024-11-20T05:42:26.137Z] =================================================================================================================== 00:31:54.301 [2024-11-20T05:42:26.137Z] Total : 15268.36 59.64 0.00 0.00 8378.69 7573.05 19126.80 00:31:54.301 { 00:31:54.301 "results": [ 00:31:54.301 { 00:31:54.301 "job": "Nvme0n1", 00:31:54.301 "core_mask": "0x2", 00:31:54.301 "workload": "randwrite", 00:31:54.301 "status": "finished", 00:31:54.301 "queue_depth": 128, 00:31:54.302 "io_size": 4096, 00:31:54.302 "runtime": 10.006446, 00:31:54.302 "iops": 15268.358016422613, 00:31:54.302 "mibps": 59.64202350165083, 00:31:54.302 "io_failed": 0, 00:31:54.302 "io_timeout": 0, 00:31:54.302 "avg_latency_us": 8378.691096459395, 00:31:54.302 "min_latency_us": 7573.0488888888885, 00:31:54.302 "max_latency_us": 19126.802962962964 00:31:54.302 } 00:31:54.302 ], 00:31:54.302 "core_count": 1 00:31:54.302 } 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2229367 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # '[' -z 2229367 ']' 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # kill -0 2229367 
00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # uname 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2229367 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2229367' 00:31:54.302 killing process with pid 2229367 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@971 -- # kill 2229367 00:31:54.302 Received shutdown signal, test time was about 10.000000 seconds 00:31:54.302 00:31:54.302 Latency(us) 00:31:54.302 [2024-11-20T05:42:26.138Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:54.302 [2024-11-20T05:42:26.138Z] =================================================================================================================== 00:31:54.302 [2024-11-20T05:42:26.138Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:54.302 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@976 -- # wait 2229367 00:31:54.560 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:54.818 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:55.388 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:55.388 06:42:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:31:55.650 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:31:55.650 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:31:55.650 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:55.910 [2024-11-20 06:42:27.562419] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 
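
The throughput and latency figures in the results block above are internally consistent; a quick cross-check with the JSON fields (not part of the test script):

  # MiB/s = IOPS * io_size / 2^20
  echo '15268.358016422613 * 4096 / 1048576' | bc -l    # 59.642..., matches the reported "mibps"
  # Little's law: average latency ~= queue_depth / IOPS
  echo '128 / 15268.358016422613 * 1000000' | bc -l     # ~8383 us vs. the reported 8378.69 us average
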
00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:31:55.910 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:56.168 request: 00:31:56.169 { 00:31:56.169 "uuid": "6f00351e-0159-4fbc-9c77-bc1444c207cf", 00:31:56.169 "method": "bdev_lvol_get_lvstores", 00:31:56.169 "req_id": 1 00:31:56.169 } 00:31:56.169 Got JSON-RPC error response 00:31:56.169 response: 00:31:56.169 { 00:31:56.169 "code": -19, 00:31:56.169 "message": "No such device" 00:31:56.169 } 00:31:56.169 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:31:56.169 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:31:56.169 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:31:56.169 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:31:56.169 06:42:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:56.426 aio_bdev 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 
f36f5347-2fa7-4ec9-9a07-7d869b3c95dc 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local bdev_name=f36f5347-2fa7-4ec9-9a07-7d869b3c95dc 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local i 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:31:56.426 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:31:56.683 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f36f5347-2fa7-4ec9-9a07-7d869b3c95dc -t 2000 00:31:56.941 [ 00:31:56.941 { 00:31:56.941 "name": "f36f5347-2fa7-4ec9-9a07-7d869b3c95dc", 00:31:56.941 "aliases": [ 00:31:56.941 "lvs/lvol" 00:31:56.941 ], 00:31:56.941 "product_name": "Logical Volume", 00:31:56.941 "block_size": 4096, 00:31:56.941 "num_blocks": 38912, 00:31:56.941 "uuid": "f36f5347-2fa7-4ec9-9a07-7d869b3c95dc", 00:31:56.941 "assigned_rate_limits": { 00:31:56.941 "rw_ios_per_sec": 0, 00:31:56.941 "rw_mbytes_per_sec": 0, 00:31:56.941 "r_mbytes_per_sec": 0, 00:31:56.941 "w_mbytes_per_sec": 0 00:31:56.941 }, 00:31:56.941 "claimed": false, 00:31:56.941 "zoned": false, 00:31:56.941 "supported_io_types": { 00:31:56.941 "read": true, 00:31:56.941 "write": true, 00:31:56.941 "unmap": true, 00:31:56.941 "flush": false, 00:31:56.941 "reset": true, 00:31:56.941 "nvme_admin": false, 00:31:56.941 "nvme_io": false, 00:31:56.941 "nvme_io_md": false, 00:31:56.941 "write_zeroes": true, 00:31:56.941 "zcopy": false, 00:31:56.941 "get_zone_info": false, 00:31:56.941 "zone_management": false, 00:31:56.941 "zone_append": false, 00:31:56.941 "compare": false, 00:31:56.941 "compare_and_write": false, 00:31:56.941 "abort": false, 00:31:56.941 "seek_hole": true, 00:31:56.941 "seek_data": true, 00:31:56.941 "copy": false, 00:31:56.941 "nvme_iov_md": false 00:31:56.941 }, 00:31:56.941 "driver_specific": { 00:31:56.941 "lvol": { 00:31:56.941 "lvol_store_uuid": "6f00351e-0159-4fbc-9c77-bc1444c207cf", 00:31:56.941 "base_bdev": "aio_bdev", 00:31:56.941 "thin_provision": false, 00:31:56.941 "num_allocated_clusters": 38, 00:31:56.941 "snapshot": false, 00:31:56.941 "clone": false, 00:31:56.941 "esnap_clone": false 00:31:56.941 } 00:31:56.941 } 00:31:56.941 } 00:31:56.941 ] 00:31:56.941 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@909 -- # return 0 00:31:56.941 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:56.941 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:31:57.201 06:42:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:31:57.201 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:31:57.201 06:42:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:57.461 06:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:31:57.461 06:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f36f5347-2fa7-4ec9-9a07-7d869b3c95dc 00:31:57.718 06:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 6f00351e-0159-4fbc-9c77-bc1444c207cf 00:31:57.979 06:42:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.547 00:31:58.547 real 0m17.930s 00:31:58.547 user 0m17.514s 00:31:58.547 sys 0m1.874s 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1128 -- # xtrace_disable 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:31:58.547 ************************************ 00:31:58.547 END TEST lvs_grow_clean 00:31:58.547 ************************************ 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1109 -- # xtrace_disable 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:31:58.547 ************************************ 00:31:58.547 START TEST lvs_grow_dirty 00:31:58.547 ************************************ 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1127 -- # lvs_grow dirty 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:58.547 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:31:58.807 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:31:58.807 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:31:59.087 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:31:59.087 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:31:59.087 06:42:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:31:59.347 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:31:59.347 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:31:59.347 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 lvol 150 00:31:59.606 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=6320d1bb-c4cf-4098-b689-793346f41577 00:31:59.606 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:31:59.606 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:31:59.866 [2024-11-20 06:42:31.610354] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:31:59.866 [2024-11-20 06:42:31.610456] 
vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:31:59.866 true 00:31:59.866 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:31:59.866 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:00.124 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:00.124 06:42:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:00.381 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6320d1bb-c4cf-4098-b689-793346f41577 00:32:00.639 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:00.895 [2024-11-20 06:42:32.710590] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:00.895 06:42:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2231441 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2231441 /var/tmp/bdevperf.sock 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2231441 ']' 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:01.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
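
Both the clean and dirty variants export the lvol over NVMe/TCP the same way; condensed from the traces above (workspace paths shortened, UUID taken from the dirty run):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6320d1bb-c4cf-4098-b689-793346f41577
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
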
00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:01.463 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:01.463 [2024-11-20 06:42:33.047698] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:01.463 [2024-11-20 06:42:33.047785] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2231441 ] 00:32:01.463 [2024-11-20 06:42:33.116914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:01.463 [2024-11-20 06:42:33.178658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:01.723 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:01.723 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:01.723 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:01.981 Nvme0n1 00:32:01.981 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:02.239 [ 00:32:02.239 { 00:32:02.239 "name": "Nvme0n1", 00:32:02.239 "aliases": [ 00:32:02.239 "6320d1bb-c4cf-4098-b689-793346f41577" 00:32:02.239 ], 00:32:02.239 "product_name": "NVMe disk", 00:32:02.239 "block_size": 4096, 00:32:02.239 "num_blocks": 38912, 00:32:02.239 "uuid": "6320d1bb-c4cf-4098-b689-793346f41577", 00:32:02.239 "numa_id": 0, 00:32:02.239 "assigned_rate_limits": { 00:32:02.239 "rw_ios_per_sec": 0, 00:32:02.239 "rw_mbytes_per_sec": 0, 00:32:02.239 "r_mbytes_per_sec": 0, 00:32:02.239 "w_mbytes_per_sec": 0 00:32:02.239 }, 00:32:02.239 "claimed": false, 00:32:02.239 "zoned": false, 00:32:02.239 "supported_io_types": { 00:32:02.239 "read": true, 00:32:02.239 "write": true, 00:32:02.239 "unmap": true, 00:32:02.239 "flush": true, 00:32:02.239 "reset": true, 00:32:02.239 "nvme_admin": true, 00:32:02.239 "nvme_io": true, 00:32:02.239 "nvme_io_md": false, 00:32:02.239 "write_zeroes": true, 00:32:02.239 "zcopy": false, 00:32:02.239 "get_zone_info": false, 00:32:02.239 "zone_management": false, 00:32:02.239 "zone_append": false, 00:32:02.239 "compare": true, 00:32:02.239 "compare_and_write": true, 00:32:02.239 "abort": true, 00:32:02.239 "seek_hole": false, 00:32:02.239 "seek_data": false, 00:32:02.239 "copy": true, 00:32:02.239 "nvme_iov_md": false 00:32:02.239 }, 00:32:02.239 "memory_domains": [ 00:32:02.239 { 00:32:02.239 "dma_device_id": "system", 00:32:02.239 "dma_device_type": 1 00:32:02.239 } 00:32:02.239 ], 00:32:02.239 "driver_specific": { 00:32:02.239 "nvme": [ 00:32:02.239 { 00:32:02.239 "trid": { 00:32:02.239 "trtype": "TCP", 00:32:02.239 "adrfam": "IPv4", 00:32:02.239 "traddr": "10.0.0.2", 00:32:02.239 "trsvcid": "4420", 00:32:02.239 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:02.239 }, 00:32:02.239 "ctrlr_data": 
{ 00:32:02.239 "cntlid": 1, 00:32:02.240 "vendor_id": "0x8086", 00:32:02.240 "model_number": "SPDK bdev Controller", 00:32:02.240 "serial_number": "SPDK0", 00:32:02.240 "firmware_revision": "25.01", 00:32:02.240 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:02.240 "oacs": { 00:32:02.240 "security": 0, 00:32:02.240 "format": 0, 00:32:02.240 "firmware": 0, 00:32:02.240 "ns_manage": 0 00:32:02.240 }, 00:32:02.240 "multi_ctrlr": true, 00:32:02.240 "ana_reporting": false 00:32:02.240 }, 00:32:02.240 "vs": { 00:32:02.240 "nvme_version": "1.3" 00:32:02.240 }, 00:32:02.240 "ns_data": { 00:32:02.240 "id": 1, 00:32:02.240 "can_share": true 00:32:02.240 } 00:32:02.240 } 00:32:02.240 ], 00:32:02.240 "mp_policy": "active_passive" 00:32:02.240 } 00:32:02.240 } 00:32:02.240 ] 00:32:02.240 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2231544 00:32:02.240 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:02.240 06:42:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:02.240 Running I/O for 10 seconds... 00:32:03.617 Latency(us) 00:32:03.617 [2024-11-20T05:42:35.453Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:03.617 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:03.617 Nvme0n1 : 1.00 14893.00 58.18 0.00 0.00 0.00 0.00 0.00 00:32:03.617 [2024-11-20T05:42:35.453Z] =================================================================================================================== 00:32:03.617 [2024-11-20T05:42:35.453Z] Total : 14893.00 58.18 0.00 0.00 0.00 0.00 0.00 00:32:03.617 00:32:04.186 06:42:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:04.447 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:04.447 Nvme0n1 : 2.00 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:32:04.447 [2024-11-20T05:42:36.283Z] =================================================================================================================== 00:32:04.447 [2024-11-20T05:42:36.283Z] Total : 15003.00 58.61 0.00 0.00 0.00 0.00 0.00 00:32:04.447 00:32:04.705 true 00:32:04.705 06:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:04.705 06:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:04.963 06:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:04.963 06:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:04.963 06:42:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2231544 00:32:05.533 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:05.533 Nvme0n1 : 
3.00 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:32:05.533 [2024-11-20T05:42:37.369Z] =================================================================================================================== 00:32:05.533 [2024-11-20T05:42:37.369Z] Total : 15039.67 58.75 0.00 0.00 0.00 0.00 0.00 00:32:05.533 00:32:06.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:06.470 Nvme0n1 : 4.00 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:32:06.470 [2024-11-20T05:42:38.306Z] =================================================================================================================== 00:32:06.470 [2024-11-20T05:42:38.306Z] Total : 15121.50 59.07 0.00 0.00 0.00 0.00 0.00 00:32:06.470 00:32:07.413 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:07.413 Nvme0n1 : 5.00 15145.20 59.16 0.00 0.00 0.00 0.00 0.00 00:32:07.413 [2024-11-20T05:42:39.249Z] =================================================================================================================== 00:32:07.413 [2024-11-20T05:42:39.249Z] Total : 15145.20 59.16 0.00 0.00 0.00 0.00 0.00 00:32:07.413 00:32:08.352 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:08.352 Nvme0n1 : 6.00 15209.00 59.41 0.00 0.00 0.00 0.00 0.00 00:32:08.352 [2024-11-20T05:42:40.188Z] =================================================================================================================== 00:32:08.352 [2024-11-20T05:42:40.188Z] Total : 15209.00 59.41 0.00 0.00 0.00 0.00 0.00 00:32:08.352 00:32:09.284 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:09.284 Nvme0n1 : 7.00 15258.86 59.60 0.00 0.00 0.00 0.00 0.00 00:32:09.284 [2024-11-20T05:42:41.120Z] =================================================================================================================== 00:32:09.284 [2024-11-20T05:42:41.120Z] Total : 15258.86 59.60 0.00 0.00 0.00 0.00 0.00 00:32:09.284 00:32:10.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:10.665 Nvme0n1 : 8.00 15316.25 59.83 0.00 0.00 0.00 0.00 0.00 00:32:10.665 [2024-11-20T05:42:42.501Z] =================================================================================================================== 00:32:10.665 [2024-11-20T05:42:42.501Z] Total : 15316.25 59.83 0.00 0.00 0.00 0.00 0.00 00:32:10.665 00:32:11.604 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:11.604 Nvme0n1 : 9.00 15336.00 59.91 0.00 0.00 0.00 0.00 0.00 00:32:11.604 [2024-11-20T05:42:43.440Z] =================================================================================================================== 00:32:11.604 [2024-11-20T05:42:43.440Z] Total : 15336.00 59.91 0.00 0.00 0.00 0.00 0.00 00:32:11.604 00:32:12.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.539 Nvme0n1 : 10.00 15367.90 60.03 0.00 0.00 0.00 0.00 0.00 00:32:12.539 [2024-11-20T05:42:44.375Z] =================================================================================================================== 00:32:12.539 [2024-11-20T05:42:44.375Z] Total : 15367.90 60.03 0.00 0.00 0.00 0.00 0.00 00:32:12.539 00:32:12.539 00:32:12.539 Latency(us) 00:32:12.539 [2024-11-20T05:42:44.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.539 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:12.539 Nvme0n1 : 10.00 15371.83 60.05 0.00 0.00 8322.11 4271.98 18447.17 00:32:12.539 
[2024-11-20T05:42:44.375Z] =================================================================================================================== 00:32:12.539 [2024-11-20T05:42:44.375Z] Total : 15371.83 60.05 0.00 0.00 8322.11 4271.98 18447.17 00:32:12.539 { 00:32:12.539 "results": [ 00:32:12.539 { 00:32:12.539 "job": "Nvme0n1", 00:32:12.539 "core_mask": "0x2", 00:32:12.539 "workload": "randwrite", 00:32:12.539 "status": "finished", 00:32:12.539 "queue_depth": 128, 00:32:12.539 "io_size": 4096, 00:32:12.539 "runtime": 10.003556, 00:32:12.539 "iops": 15371.833775909286, 00:32:12.539 "mibps": 60.04622568714565, 00:32:12.539 "io_failed": 0, 00:32:12.539 "io_timeout": 0, 00:32:12.539 "avg_latency_us": 8322.105629659494, 00:32:12.539 "min_latency_us": 4271.976296296296, 00:32:12.539 "max_latency_us": 18447.17037037037 00:32:12.539 } 00:32:12.539 ], 00:32:12.539 "core_count": 1 00:32:12.539 } 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2231441 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # '[' -z 2231441 ']' 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # kill -0 2231441 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # uname 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2231441 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2231441' 00:32:12.539 killing process with pid 2231441 00:32:12.539 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@971 -- # kill 2231441 00:32:12.539 Received shutdown signal, test time was about 10.000000 seconds 00:32:12.539 00:32:12.539 Latency(us) 00:32:12.539 [2024-11-20T05:42:44.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:12.540 [2024-11-20T05:42:44.376Z] =================================================================================================================== 00:32:12.540 [2024-11-20T05:42:44.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:12.540 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@976 -- # wait 2231441 00:32:12.540 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:13.106 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode0 00:32:13.106 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:13.106 06:42:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:13.365 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:13.365 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:32:13.365 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2228925 00:32:13.365 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2228925 00:32:13.625 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2228925 Killed "${NVMF_APP[@]}" "$@" 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=2232861 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 2232861 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # '[' -z 2232861 ']' 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:13.625 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:13.625 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:13.625 [2024-11-20 06:42:45.278026] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:13.625 [2024-11-20 06:42:45.279155] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:13.626 [2024-11-20 06:42:45.279221] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:13.626 [2024-11-20 06:42:45.354019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:13.626 [2024-11-20 06:42:45.412080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:13.626 [2024-11-20 06:42:45.412135] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:13.626 [2024-11-20 06:42:45.412162] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:13.626 [2024-11-20 06:42:45.412173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:13.626 [2024-11-20 06:42:45.412183] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:13.626 [2024-11-20 06:42:45.412769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:13.885 [2024-11-20 06:42:45.513151] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:13.885 [2024-11-20 06:42:45.513482] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@866 -- # return 0 00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:13.885 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:14.143 [2024-11-20 06:42:45.839427] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:32:14.143 [2024-11-20 06:42:45.839563] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:32:14.143 [2024-11-20 06:42:45.839641] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 6320d1bb-c4cf-4098-b689-793346f41577 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6320d1bb-c4cf-4098-b689-793346f41577 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:14.143 06:42:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:14.400 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6320d1bb-c4cf-4098-b689-793346f41577 -t 2000 00:32:14.658 [ 00:32:14.658 { 00:32:14.659 "name": "6320d1bb-c4cf-4098-b689-793346f41577", 00:32:14.659 "aliases": [ 00:32:14.659 "lvs/lvol" 00:32:14.659 ], 00:32:14.659 "product_name": "Logical Volume", 00:32:14.659 "block_size": 4096, 00:32:14.659 "num_blocks": 38912, 00:32:14.659 "uuid": "6320d1bb-c4cf-4098-b689-793346f41577", 00:32:14.659 "assigned_rate_limits": { 00:32:14.659 "rw_ios_per_sec": 0, 00:32:14.659 "rw_mbytes_per_sec": 0, 00:32:14.659 
"r_mbytes_per_sec": 0, 00:32:14.659 "w_mbytes_per_sec": 0 00:32:14.659 }, 00:32:14.659 "claimed": false, 00:32:14.659 "zoned": false, 00:32:14.659 "supported_io_types": { 00:32:14.659 "read": true, 00:32:14.659 "write": true, 00:32:14.659 "unmap": true, 00:32:14.659 "flush": false, 00:32:14.659 "reset": true, 00:32:14.659 "nvme_admin": false, 00:32:14.659 "nvme_io": false, 00:32:14.659 "nvme_io_md": false, 00:32:14.659 "write_zeroes": true, 00:32:14.659 "zcopy": false, 00:32:14.659 "get_zone_info": false, 00:32:14.659 "zone_management": false, 00:32:14.659 "zone_append": false, 00:32:14.659 "compare": false, 00:32:14.659 "compare_and_write": false, 00:32:14.659 "abort": false, 00:32:14.659 "seek_hole": true, 00:32:14.659 "seek_data": true, 00:32:14.659 "copy": false, 00:32:14.659 "nvme_iov_md": false 00:32:14.659 }, 00:32:14.659 "driver_specific": { 00:32:14.659 "lvol": { 00:32:14.659 "lvol_store_uuid": "9f4aa888-29c8-465f-94e8-0a9e30d3e125", 00:32:14.659 "base_bdev": "aio_bdev", 00:32:14.659 "thin_provision": false, 00:32:14.659 "num_allocated_clusters": 38, 00:32:14.659 "snapshot": false, 00:32:14.659 "clone": false, 00:32:14.659 "esnap_clone": false 00:32:14.659 } 00:32:14.659 } 00:32:14.659 } 00:32:14.659 ] 00:32:14.659 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:14.659 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:14.659 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:32:14.917 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:32:14.917 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:14.917 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:32:15.175 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:32:15.175 06:42:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:15.434 [2024-11-20 06:42:47.217339] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:15.434 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:15.434 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:32:15.434 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:15.434 06:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.434 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:15.435 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:15.695 request: 00:32:15.695 { 00:32:15.695 "uuid": "9f4aa888-29c8-465f-94e8-0a9e30d3e125", 00:32:15.695 "method": "bdev_lvol_get_lvstores", 00:32:15.695 "req_id": 1 00:32:15.695 } 00:32:15.695 Got JSON-RPC error response 00:32:15.695 response: 00:32:15.695 { 00:32:15.695 "code": -19, 00:32:15.695 "message": "No such device" 00:32:15.695 } 00:32:15.695 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:32:15.695 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:32:15.695 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:32:15.695 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:32:15.695 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:15.955 aio_bdev 00:32:16.213 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 6320d1bb-c4cf-4098-b689-793346f41577 00:32:16.213 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local bdev_name=6320d1bb-c4cf-4098-b689-793346f41577 00:32:16.213 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # local bdev_timeout= 00:32:16.213 06:42:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local i 00:32:16.213 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # [[ -z '' ]] 00:32:16.213 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # bdev_timeout=2000 00:32:16.213 06:42:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:16.474 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 6320d1bb-c4cf-4098-b689-793346f41577 -t 2000 00:32:16.733 [ 00:32:16.733 { 00:32:16.733 "name": "6320d1bb-c4cf-4098-b689-793346f41577", 00:32:16.733 "aliases": [ 00:32:16.733 "lvs/lvol" 00:32:16.733 ], 00:32:16.733 "product_name": "Logical Volume", 00:32:16.733 "block_size": 4096, 00:32:16.733 "num_blocks": 38912, 00:32:16.733 "uuid": "6320d1bb-c4cf-4098-b689-793346f41577", 00:32:16.733 "assigned_rate_limits": { 00:32:16.733 "rw_ios_per_sec": 0, 00:32:16.733 "rw_mbytes_per_sec": 0, 00:32:16.733 "r_mbytes_per_sec": 0, 00:32:16.733 "w_mbytes_per_sec": 0 00:32:16.733 }, 00:32:16.733 "claimed": false, 00:32:16.733 "zoned": false, 00:32:16.733 "supported_io_types": { 00:32:16.733 "read": true, 00:32:16.733 "write": true, 00:32:16.733 "unmap": true, 00:32:16.733 "flush": false, 00:32:16.733 "reset": true, 00:32:16.733 "nvme_admin": false, 00:32:16.733 "nvme_io": false, 00:32:16.733 "nvme_io_md": false, 00:32:16.733 "write_zeroes": true, 00:32:16.733 "zcopy": false, 00:32:16.733 "get_zone_info": false, 00:32:16.733 "zone_management": false, 00:32:16.733 "zone_append": false, 00:32:16.733 "compare": false, 00:32:16.733 "compare_and_write": false, 00:32:16.733 "abort": false, 00:32:16.733 "seek_hole": true, 00:32:16.733 "seek_data": true, 00:32:16.733 "copy": false, 00:32:16.733 "nvme_iov_md": false 00:32:16.733 }, 00:32:16.733 "driver_specific": { 00:32:16.733 "lvol": { 00:32:16.733 "lvol_store_uuid": "9f4aa888-29c8-465f-94e8-0a9e30d3e125", 00:32:16.733 "base_bdev": "aio_bdev", 00:32:16.733 "thin_provision": false, 00:32:16.733 "num_allocated_clusters": 38, 00:32:16.733 "snapshot": false, 00:32:16.733 "clone": false, 00:32:16.733 "esnap_clone": false 00:32:16.733 } 00:32:16.733 } 00:32:16.733 } 00:32:16.733 ] 00:32:16.733 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@909 -- # return 0 00:32:16.733 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:16.733 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:16.993 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:16.993 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:16.993 06:42:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:17.252 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:17.252 06:42:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6320d1bb-c4cf-4098-b689-793346f41577 00:32:17.511 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9f4aa888-29c8-465f-94e8-0a9e30d3e125 00:32:17.771 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:18.031 00:32:18.031 real 0m19.592s 00:32:18.031 user 0m36.507s 00:32:18.031 sys 0m4.834s 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:18.031 ************************************ 00:32:18.031 END TEST lvs_grow_dirty 00:32:18.031 ************************************ 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # type=--id 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@811 -- # id=0 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # '[' --id = --pid ']' 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # shm_files=nvmf_trace.0 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # [[ -z nvmf_trace.0 ]] 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@822 -- # for n in $shm_files 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:32:18.031 nvmf_trace.0 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # return 0 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 
00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:18.031 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:18.031 rmmod nvme_tcp 00:32:18.031 rmmod nvme_fabrics 00:32:18.031 rmmod nvme_keyring 00:32:18.290 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:18.290 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:32:18.290 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 2232861 ']' 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 2232861 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # '[' -z 2232861 ']' 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # kill -0 2232861 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # uname 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2232861 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2232861' 00:32:18.291 killing process with pid 2232861 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@971 -- # kill 2232861 00:32:18.291 06:42:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@976 -- # wait 2232861 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:18.549 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:18.550 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:18.550 06:42:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:20.455 00:32:20.455 real 0m42.988s 00:32:20.455 user 0m55.842s 00:32:20.455 sys 0m8.665s 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:20.455 ************************************ 00:32:20.455 END TEST nvmf_lvs_grow 00:32:20.455 ************************************ 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:20.455 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:20.455 ************************************ 00:32:20.455 START TEST nvmf_bdev_io_wait 00:32:20.455 ************************************ 00:32:20.456 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:32:20.714 * Looking for test storage... 
00:32:20.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:20.714 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:20.714 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lcov --version 00:32:20.714 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:20.714 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:20.714 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:20.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.715 --rc genhtml_branch_coverage=1 00:32:20.715 --rc genhtml_function_coverage=1 00:32:20.715 --rc genhtml_legend=1 00:32:20.715 --rc geninfo_all_blocks=1 00:32:20.715 --rc geninfo_unexecuted_blocks=1 00:32:20.715 00:32:20.715 ' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:20.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.715 --rc genhtml_branch_coverage=1 00:32:20.715 --rc genhtml_function_coverage=1 00:32:20.715 --rc genhtml_legend=1 00:32:20.715 --rc geninfo_all_blocks=1 00:32:20.715 --rc geninfo_unexecuted_blocks=1 00:32:20.715 00:32:20.715 ' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:20.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.715 --rc genhtml_branch_coverage=1 00:32:20.715 --rc genhtml_function_coverage=1 00:32:20.715 --rc genhtml_legend=1 00:32:20.715 --rc geninfo_all_blocks=1 00:32:20.715 --rc geninfo_unexecuted_blocks=1 00:32:20.715 00:32:20.715 ' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:20.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:20.715 --rc genhtml_branch_coverage=1 00:32:20.715 --rc genhtml_function_coverage=1 00:32:20.715 --rc genhtml_legend=1 00:32:20.715 --rc geninfo_all_blocks=1 00:32:20.715 --rc 
geninfo_unexecuted_blocks=1 00:32:20.715 00:32:20.715 ' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.715 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:32:20.716 06:42:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:22.702 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:22.702 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:22.702 Found net devices under 0000:09:00.0: cvl_0_0 00:32:22.702 
06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:22.702 Found net devices under 0000:09:00.1: cvl_0_1 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:22.702 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:22.703 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:22.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:22.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:32:22.962 00:32:22.962 --- 10.0.0.2 ping statistics --- 00:32:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.962 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:22.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:22.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:32:22.962 00:32:22.962 --- 10.0.0.1 ping statistics --- 00:32:22.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:22.962 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:22.962 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=2235394 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 2235394 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # '[' -z 2235394 ']' 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:22.963 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:22.963 [2024-11-20 06:42:54.681300] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:22.963 [2024-11-20 06:42:54.682361] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:22.963 [2024-11-20 06:42:54.682413] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:22.963 [2024-11-20 06:42:54.754131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:23.227 [2024-11-20 06:42:54.817685] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:23.227 [2024-11-20 06:42:54.817727] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:23.227 [2024-11-20 06:42:54.817756] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:23.227 [2024-11-20 06:42:54.817768] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:23.227 [2024-11-20 06:42:54.817779] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:23.227 [2024-11-20 06:42:54.819459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:23.227 [2024-11-20 06:42:54.819647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:23.227 [2024-11-20 06:42:54.819651] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.227 [2024-11-20 06:42:54.820116] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:32:23.227 [2024-11-20 06:42:54.819484] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@866 -- # return 0 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.227 06:42:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.227 [2024-11-20 06:42:55.026280] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:23.227 [2024-11-20 06:42:55.026504] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:23.227 [2024-11-20 06:42:55.027462] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:23.227 [2024-11-20 06:42:55.028257] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
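The rpc_cmd calls recorded around this point (bdev_set_options and framework_start_init just above, transport/bdev/subsystem/listener creation just below) drive the target's configuration. A minimal standalone sketch of the same sequence, assuming rpc_cmd forwards to scripts/rpc.py over the default /var/tmp/spdk.sock as it does in this harness, and reading the short flags as their usual rpc.py long forms:
# Sketch only; flag interpretations in the comments are assumptions, commands and arguments are taken from the trace.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc bdev_set_options -p 5 -c 1                    # small bdev_io pool/cache so I/O submissions have to wait (the behavior this test exercises)
$rpc framework_start_init                          # finish init; nvmf_tgt was launched with --wait-for-rpc
$rpc nvmf_create_transport -t tcp -o -u 8192       # TCP transport with the same flags as the trace
$rpc bdev_malloc_create 64 512 -b Malloc0          # 64 MB malloc bdev with 512-byte blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420   # listener the bdevperf instances connect to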
00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.227 [2024-11-20 06:42:55.040364] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.227 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.487 Malloc0 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:23.487 [2024-11-20 06:42:55.100496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2235537 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2235538 00:32:23.487 06:42:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2235541 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:23.487 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:23.488 { 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme$subsystem", 00:32:23.488 "trtype": "$TEST_TRANSPORT", 00:32:23.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "$NVMF_PORT", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.488 "hdgst": ${hdgst:-false}, 00:32:23.488 "ddgst": ${ddgst:-false} 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 } 00:32:23.488 EOF 00:32:23.488 )") 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2235543 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:23.488 { 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme$subsystem", 00:32:23.488 "trtype": "$TEST_TRANSPORT", 00:32:23.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "$NVMF_PORT", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.488 "hdgst": ${hdgst:-false}, 00:32:23.488 "ddgst": ${ddgst:-false} 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 } 00:32:23.488 EOF 
00:32:23.488 )") 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:23.488 { 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme$subsystem", 00:32:23.488 "trtype": "$TEST_TRANSPORT", 00:32:23.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "$NVMF_PORT", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.488 "hdgst": ${hdgst:-false}, 00:32:23.488 "ddgst": ${ddgst:-false} 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 } 00:32:23.488 EOF 00:32:23.488 )") 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:23.488 { 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme$subsystem", 00:32:23.488 "trtype": "$TEST_TRANSPORT", 00:32:23.488 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "$NVMF_PORT", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:23.488 "hdgst": ${hdgst:-false}, 00:32:23.488 "ddgst": ${ddgst:-false} 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 } 00:32:23.488 EOF 00:32:23.488 )") 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2235537 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme1", 00:32:23.488 "trtype": "tcp", 00:32:23.488 "traddr": "10.0.0.2", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "4420", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:23.488 "hdgst": false, 00:32:23.488 "ddgst": false 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 }' 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme1", 00:32:23.488 "trtype": "tcp", 00:32:23.488 "traddr": "10.0.0.2", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "4420", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:23.488 "hdgst": false, 00:32:23.488 "ddgst": false 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 }' 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme1", 00:32:23.488 "trtype": "tcp", 00:32:23.488 "traddr": "10.0.0.2", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "4420", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:23.488 "hdgst": false, 00:32:23.488 "ddgst": false 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 }' 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:32:23.488 06:42:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:23.488 "params": { 00:32:23.488 "name": "Nvme1", 00:32:23.488 "trtype": "tcp", 00:32:23.488 "traddr": "10.0.0.2", 00:32:23.488 "adrfam": "ipv4", 00:32:23.488 "trsvcid": "4420", 00:32:23.488 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:23.488 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:23.488 "hdgst": false, 00:32:23.488 "ddgst": false 00:32:23.488 }, 00:32:23.488 "method": "bdev_nvme_attach_controller" 00:32:23.488 }' 00:32:23.488 [2024-11-20 06:42:55.153816] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:23.488 [2024-11-20 06:42:55.153816] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:23.488 [2024-11-20 06:42:55.153809] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:23.488 [2024-11-20 06:42:55.153819] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:32:23.488 [2024-11-20 06:42:55.153899] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 
00:32:23.488 [2024-11-20 06:42:55.153899] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 
00:32:23.488 [2024-11-20 06:42:55.153899] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 
00:32:23.488 [2024-11-20 06:42:55.153901] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 
00:32:23.748 [2024-11-20 06:42:55.339721] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.748 [2024-11-20 06:42:55.395715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:23.748 [2024-11-20 06:42:55.444497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:23.748 [2024-11-20 06:42:55.500588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:23.748 [2024-11-20 06:42:55.548609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.006 [2024-11-20 06:42:55.606401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:24.006 [2024-11-20 06:42:55.627673] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:24.006 [2024-11-20 06:42:55.680201] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:32:24.006 Running I/O for 1 seconds... 00:32:24.264 Running I/O for 1 seconds... 00:32:24.264 Running I/O for 1 seconds... 00:32:24.264 Running I/O for 1 seconds... 
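Each bdevperf instance above reads its bdev configuration from --json /dev/fd/63, which gen_nvmf_target_json fills with the bdev_nvme_attach_controller object printed in the trace. A sketch of an equivalent standalone run for the write workload follows; the inner params/method object is copied from the trace, while the outer subsystems/bdev/config wrapper and the /tmp/nvme1.json path are assumptions for illustration.
# Hypothetical config file; the wrapper layout is an assumption about the shape gen_nvmf_target_json assembles.
cat > /tmp/nvme1.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
EOF
# Same invocation as the write job in the trace, pointed at the file instead of /dev/fd/63.
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256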
00:32:25.199 194952.00 IOPS, 761.53 MiB/s 00:32:25.199 Latency(us) 00:32:25.199 [2024-11-20T05:42:57.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.199 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:32:25.199 Nvme1n1 : 1.00 194583.09 760.09 0.00 0.00 654.29 292.79 1856.85 00:32:25.199 [2024-11-20T05:42:57.035Z] =================================================================================================================== 00:32:25.199 [2024-11-20T05:42:57.035Z] Total : 194583.09 760.09 0.00 0.00 654.29 292.79 1856.85 00:32:25.199 6515.00 IOPS, 25.45 MiB/s 00:32:25.199 Latency(us) 00:32:25.199 [2024-11-20T05:42:57.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.199 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:32:25.199 Nvme1n1 : 1.02 6517.92 25.46 0.00 0.00 19460.37 4102.07 32428.18 00:32:25.199 [2024-11-20T05:42:57.035Z] =================================================================================================================== 00:32:25.199 [2024-11-20T05:42:57.035Z] Total : 6517.92 25.46 0.00 0.00 19460.37 4102.07 32428.18 00:32:25.199 9692.00 IOPS, 37.86 MiB/s 00:32:25.199 Latency(us) 00:32:25.199 [2024-11-20T05:42:57.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.199 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:32:25.199 Nvme1n1 : 1.01 9752.47 38.10 0.00 0.00 13068.74 5752.60 18738.44 00:32:25.199 [2024-11-20T05:42:57.035Z] =================================================================================================================== 00:32:25.199 [2024-11-20T05:42:57.035Z] Total : 9752.47 38.10 0.00 0.00 13068.74 5752.60 18738.44 00:32:25.199 6445.00 IOPS, 25.18 MiB/s 00:32:25.199 Latency(us) 00:32:25.199 [2024-11-20T05:42:57.035Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:25.199 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:32:25.199 Nvme1n1 : 1.01 6574.65 25.68 0.00 0.00 19414.75 4077.80 37671.06 00:32:25.199 [2024-11-20T05:42:57.035Z] =================================================================================================================== 00:32:25.199 [2024-11-20T05:42:57.035Z] Total : 6574.65 25.68 0.00 0.00 19414.75 4077.80 37671.06 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2235538 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2235541 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2235543 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@46 -- # nvmftestfini 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:25.459 rmmod nvme_tcp 00:32:25.459 rmmod nvme_fabrics 00:32:25.459 rmmod nvme_keyring 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 2235394 ']' 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 2235394 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # '[' -z 2235394 ']' 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # kill -0 2235394 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # uname 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:25.459 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2235394 00:32:25.460 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:25.460 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:25.460 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2235394' 00:32:25.460 killing process with pid 2235394 00:32:25.460 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@971 -- # kill 2235394 00:32:25.460 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@976 -- # wait 2235394 00:32:25.720 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:25.720 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:25.720 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:25.720 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:32:25.720 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 
00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:25.721 06:42:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:28.259 00:32:28.259 real 0m7.269s 00:32:28.259 user 0m14.633s 00:32:28.259 sys 0m4.081s 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:32:28.259 ************************************ 00:32:28.259 END TEST nvmf_bdev_io_wait 00:32:28.259 ************************************ 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:28.259 ************************************ 00:32:28.259 START TEST nvmf_queue_depth 00:32:28.259 ************************************ 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:32:28.259 * Looking for test storage... 
00:32:28.259 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lcov --version 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:32:28.259 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.260 --rc genhtml_branch_coverage=1 00:32:28.260 --rc genhtml_function_coverage=1 00:32:28.260 --rc genhtml_legend=1 00:32:28.260 --rc geninfo_all_blocks=1 00:32:28.260 --rc geninfo_unexecuted_blocks=1 00:32:28.260 00:32:28.260 ' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.260 --rc genhtml_branch_coverage=1 00:32:28.260 --rc genhtml_function_coverage=1 00:32:28.260 --rc genhtml_legend=1 00:32:28.260 --rc geninfo_all_blocks=1 00:32:28.260 --rc geninfo_unexecuted_blocks=1 00:32:28.260 00:32:28.260 ' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.260 --rc genhtml_branch_coverage=1 00:32:28.260 --rc genhtml_function_coverage=1 00:32:28.260 --rc genhtml_legend=1 00:32:28.260 --rc geninfo_all_blocks=1 00:32:28.260 --rc geninfo_unexecuted_blocks=1 00:32:28.260 00:32:28.260 ' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:28.260 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:28.260 --rc genhtml_branch_coverage=1 00:32:28.260 --rc genhtml_function_coverage=1 00:32:28.260 --rc genhtml_legend=1 00:32:28.260 --rc geninfo_all_blocks=1 00:32:28.260 --rc 
geninfo_unexecuted_blocks=1 00:32:28.260 00:32:28.260 ' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:32:28.260 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:32:28.261 06:42:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 
00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:30.168 06:43:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:30.168 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:30.168 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:30.168 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 
00:32:30.169 Found net devices under 0000:09:00.0: cvl_0_0 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:30.169 Found net devices under 0000:09:00.1: cvl_0_1 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:30.169 06:43:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:30.427 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:30.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:30.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:32:30.428 00:32:30.428 --- 10.0.0.2 ping statistics --- 00:32:30.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.428 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:30.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:30.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.116 ms 00:32:30.428 00:32:30.428 --- 10.0.0.1 ping statistics --- 00:32:30.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:30.428 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=2237768 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 2237768 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2237768 ']' 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:30.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
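The namespace bring-up traced above (nvmf_tcp_init in nvmf/common.sh) reduces to the sketch below. Interface names, addresses and the iptables rule are copied from this run; the sequence assumes the two E810 ports are already bound to the ice driver as cvl_0_0/cvl_0_1 and that it runs as root.

# target-side port goes into its own network namespace, initiator-side port stays in the default one
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# open the NVMe/TCP listener port on the initiator-facing interface, then verify both directions
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1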
00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:30.428 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.428 [2024-11-20 06:43:02.128941] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:30.428 [2024-11-20 06:43:02.129971] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:30.428 [2024-11-20 06:43:02.130025] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:30.428 [2024-11-20 06:43:02.205034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.428 [2024-11-20 06:43:02.261594] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:30.428 [2024-11-20 06:43:02.261650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:30.428 [2024-11-20 06:43:02.261672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:30.428 [2024-11-20 06:43:02.261704] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:30.428 [2024-11-20 06:43:02.261729] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:30.687 [2024-11-20 06:43:02.262418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:30.687 [2024-11-20 06:43:02.349374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:30.687 [2024-11-20 06:43:02.349705] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
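nvmfappstart then launches the target inside that namespace. Stripped of the test harness, the equivalent commands are roughly the following; paths are relative to the SPDK checkout, and the polling loop is only a simplified stand-in for the suite's waitforlisten helper (rpc_get_methods is used here merely as a cheap liveness probe).

# start nvmf_tgt in interrupt mode on core 1 (-m 0x2), tracepoint group mask 0xFFFF, shm id 0
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &
nvmfpid=$!
# wait until the app answers on its default RPC socket
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done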
00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 [2024-11-20 06:43:02.399052] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 Malloc0 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 
00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 [2024-11-20 06:43:02.463128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2237787 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2237787 /var/tmp/bdevperf.sock 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@833 -- # '[' -z 2237787 ']' 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:30.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:30.687 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:30.687 [2024-11-20 06:43:02.509286] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
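Pulled together, the rpc_cmd calls above plus the controller attach and perform_tests that follow give the whole queue_depth scenario in a handful of commands. Paths are again relative to the SPDK checkout; rpc_cmd in the test talks to the same default /var/tmp/spdk.sock used here.

# target side: TCP transport, a 64 MiB / 512 B-block malloc bdev, one subsystem listening on 10.0.0.2:4420
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# initiator side: bdevperf waits for RPC (-z), gets an NVMe-oF controller attached, then runs
# 4 KiB verify I/O at queue depth 1024 for 10 seconds
./build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests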
00:32:30.688 [2024-11-20 06:43:02.509365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2237787 ] 00:32:30.946 [2024-11-20 06:43:02.576313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:30.946 [2024-11-20 06:43:02.633749] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:30.946 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:30.946 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@866 -- # return 0 00:32:30.946 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:30.946 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:30.946 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:31.206 NVMe0n1 00:32:31.206 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:31.206 06:43:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:31.466 Running I/O for 10 seconds... 00:32:33.338 8331.00 IOPS, 32.54 MiB/s [2024-11-20T05:43:06.115Z] 8681.00 IOPS, 33.91 MiB/s [2024-11-20T05:43:07.491Z] 8571.00 IOPS, 33.48 MiB/s [2024-11-20T05:43:08.432Z] 8693.00 IOPS, 33.96 MiB/s [2024-11-20T05:43:09.373Z] 8634.80 IOPS, 33.73 MiB/s [2024-11-20T05:43:10.323Z] 8699.33 IOPS, 33.98 MiB/s [2024-11-20T05:43:11.261Z] 8704.14 IOPS, 34.00 MiB/s [2024-11-20T05:43:12.200Z] 8706.50 IOPS, 34.01 MiB/s [2024-11-20T05:43:13.136Z] 8740.56 IOPS, 34.14 MiB/s [2024-11-20T05:43:13.396Z] 8732.90 IOPS, 34.11 MiB/s 00:32:41.560 Latency(us) 00:32:41.560 [2024-11-20T05:43:13.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.560 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:32:41.560 Verification LBA range: start 0x0 length 0x4000 00:32:41.560 NVMe0n1 : 10.07 8772.31 34.27 0.00 0.00 116210.47 12621.75 68351.62 00:32:41.560 [2024-11-20T05:43:13.396Z] =================================================================================================================== 00:32:41.560 [2024-11-20T05:43:13.396Z] Total : 8772.31 34.27 0.00 0.00 116210.47 12621.75 68351.62 00:32:41.560 { 00:32:41.560 "results": [ 00:32:41.560 { 00:32:41.560 "job": "NVMe0n1", 00:32:41.560 "core_mask": "0x1", 00:32:41.560 "workload": "verify", 00:32:41.560 "status": "finished", 00:32:41.560 "verify_range": { 00:32:41.560 "start": 0, 00:32:41.560 "length": 16384 00:32:41.560 }, 00:32:41.560 "queue_depth": 1024, 00:32:41.560 "io_size": 4096, 00:32:41.560 "runtime": 10.067935, 00:32:41.560 "iops": 8772.305343647928, 00:32:41.560 "mibps": 34.26681774862472, 00:32:41.560 "io_failed": 0, 00:32:41.560 "io_timeout": 0, 00:32:41.560 "avg_latency_us": 116210.46971261167, 00:32:41.560 "min_latency_us": 12621.748148148148, 00:32:41.560 "max_latency_us": 68351.62074074073 00:32:41.560 } 00:32:41.560 
], 00:32:41.560 "core_count": 1 00:32:41.560 } 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2237787 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2237787 ']' 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2237787 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2237787 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2237787' 00:32:41.560 killing process with pid 2237787 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2237787 00:32:41.560 Received shutdown signal, test time was about 10.000000 seconds 00:32:41.560 00:32:41.560 Latency(us) 00:32:41.560 [2024-11-20T05:43:13.396Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:41.560 [2024-11-20T05:43:13.396Z] =================================================================================================================== 00:32:41.560 [2024-11-20T05:43:13.396Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:41.560 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2237787 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:41.820 rmmod nvme_tcp 00:32:41.820 rmmod nvme_fabrics 00:32:41.820 rmmod nvme_keyring 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:32:41.820 06:43:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 2237768 ']' 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 2237768 00:32:41.820 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@952 -- # '[' -z 2237768 ']' 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@956 -- # kill -0 2237768 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # uname 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2237768 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2237768' 00:32:41.821 killing process with pid 2237768 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@971 -- # kill 2237768 00:32:41.821 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@976 -- # wait 2237768 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:42.080 06:43:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:43.989 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:43.989 00:32:43.989 real 0m16.248s 00:32:43.989 user 0m22.357s 00:32:43.989 sys 0m3.416s 00:32:43.989 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@1128 -- # xtrace_disable 00:32:43.989 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:32:43.989 ************************************ 00:32:43.989 END TEST nvmf_queue_depth 00:32:43.989 ************************************ 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:44.249 ************************************ 00:32:44.249 START TEST nvmf_target_multipath 00:32:44.249 ************************************ 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:32:44.249 * Looking for test storage... 00:32:44.249 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lcov --version 00:32:44.249 06:43:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:32:44.249 06:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.249 --rc genhtml_branch_coverage=1 00:32:44.249 --rc genhtml_function_coverage=1 00:32:44.249 --rc genhtml_legend=1 00:32:44.249 --rc geninfo_all_blocks=1 00:32:44.249 --rc geninfo_unexecuted_blocks=1 00:32:44.249 00:32:44.249 ' 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.249 --rc genhtml_branch_coverage=1 00:32:44.249 --rc genhtml_function_coverage=1 00:32:44.249 --rc genhtml_legend=1 00:32:44.249 --rc geninfo_all_blocks=1 00:32:44.249 --rc geninfo_unexecuted_blocks=1 00:32:44.249 00:32:44.249 ' 00:32:44.249 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:44.249 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.250 --rc genhtml_branch_coverage=1 00:32:44.250 --rc genhtml_function_coverage=1 00:32:44.250 --rc genhtml_legend=1 00:32:44.250 --rc geninfo_all_blocks=1 00:32:44.250 --rc 
geninfo_unexecuted_blocks=1 00:32:44.250 00:32:44.250 ' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:44.250 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:44.250 --rc genhtml_branch_coverage=1 00:32:44.250 --rc genhtml_function_coverage=1 00:32:44.250 --rc genhtml_legend=1 00:32:44.250 --rc geninfo_all_blocks=1 00:32:44.250 --rc geninfo_unexecuted_blocks=1 00:32:44.250 00:32:44.250 ' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 
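The hostnqn/hostid pair generated here is the initiator identity the multipath test hands to its nvme connect helpers (NVME_CONNECT/NVME_HOST above). In isolation that amounts to the sketch below; the value shown is the one from this run, and the exact way common.sh extracts the uuid may differ from this approximation.

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a
NVME_HOSTID=${NVME_HOSTNQN##*:}     # the uuid suffix: 29f67375-a902-e411-ace9-001e67bc3c9a
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# the connect helpers pass "${NVME_HOST[@]}" along with -t tcp, the target address and the subsystem nqn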
00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:44.250 06:43:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:32:44.250 06:43:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 
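nvmftestinit now repeats, for the multipath test, the same NIC discovery and namespace setup already traced above for queue_depth. The discovery portion that follows boils down to this loop, with the PCI addresses hard-coded from this run for illustration.

# map each E810 PCI function (0x8086:0x159b) to its kernel netdev via sysfs
for pci in 0000:09:00.0 0000:09:00.1; do
    for netdir in /sys/bus/pci/devices/"$pci"/net/*; do
        echo "Found net devices under $pci: ${netdir##*/}"   # cvl_0_0 and cvl_0_1 on this rig
    done
done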
00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:46.787 06:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:46.787 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:46.787 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:46.787 06:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:46.787 Found net devices under 0000:09:00.0: cvl_0_0 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:46.787 Found net devices under 0000:09:00.1: cvl_0_1 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:46.787 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:46.788 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:46.788 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.298 ms 00:32:46.788 00:32:46.788 --- 10.0.0.2 ping statistics --- 00:32:46.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.788 rtt min/avg/max/mdev = 0.298/0.298/0.298/0.000 ms 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:46.788 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.788 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.124 ms 00:32:46.788 00:32:46.788 --- 10.0.0.1 ping statistics --- 00:32:46.788 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.788 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:32:46.788 only one NIC for nvmf test 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:46.788 rmmod nvme_tcp 00:32:46.788 rmmod nvme_fabrics 00:32:46.788 rmmod nvme_keyring 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:46.788 06:43:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:46.788 06:43:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:32:48.697 06:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:48.697 00:32:48.697 real 0m4.489s 00:32:48.697 user 0m0.895s 00:32:48.697 sys 0m1.589s 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1128 -- # xtrace_disable 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:32:48.697 ************************************ 00:32:48.697 END TEST nvmf_target_multipath 00:32:48.697 ************************************ 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:48.697 ************************************ 00:32:48.697 START TEST nvmf_zcopy 00:32:48.697 ************************************ 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:48.697 * Looking for test storage... 
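Note on the multipath result above: the case ends almost as soon as its prologue finishes, because nvmf_tcp_init left the second target IP empty (common.sh@262 in the trace), so multipath.sh lines 45-48 print 'only one NIC for nvmf test', run nvmftestfini and exit 0. A minimal sketch of that early-exit branch, assuming the empty string tested at multipath.sh@45 is $NVMF_SECOND_TARGET_IP (the variable cleared at common.sh@262 above):

    # Reconstructed from the '[ -z ]' / echo / nvmftestfini / exit 0 sequence in the
    # trace; the variable name is an assumption based on common.sh@262 above.
    if [ -z "$NVMF_SECOND_TARGET_IP" ]; then
        echo 'only one NIC for nvmf test'
        nvmftestfini
        exit 0
    fi

The 'END TEST nvmf_target_multipath' banner and the exit 0 in the log are this branch being taken, which is why the suite moves straight on to nvmf_zcopy.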
00:32:48.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lcov --version 00:32:48.697 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:32:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.959 --rc genhtml_branch_coverage=1 00:32:48.959 --rc genhtml_function_coverage=1 00:32:48.959 --rc genhtml_legend=1 00:32:48.959 --rc geninfo_all_blocks=1 00:32:48.959 --rc geninfo_unexecuted_blocks=1 00:32:48.959 00:32:48.959 ' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:32:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.959 --rc genhtml_branch_coverage=1 00:32:48.959 --rc genhtml_function_coverage=1 00:32:48.959 --rc genhtml_legend=1 00:32:48.959 --rc geninfo_all_blocks=1 00:32:48.959 --rc geninfo_unexecuted_blocks=1 00:32:48.959 00:32:48.959 ' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:32:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.959 --rc genhtml_branch_coverage=1 00:32:48.959 --rc genhtml_function_coverage=1 00:32:48.959 --rc genhtml_legend=1 00:32:48.959 --rc geninfo_all_blocks=1 00:32:48.959 --rc geninfo_unexecuted_blocks=1 00:32:48.959 00:32:48.959 ' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:32:48.959 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:48.959 --rc genhtml_branch_coverage=1 00:32:48.959 --rc genhtml_function_coverage=1 00:32:48.959 --rc genhtml_legend=1 00:32:48.959 --rc geninfo_all_blocks=1 00:32:48.959 --rc geninfo_unexecuted_blocks=1 00:32:48.959 00:32:48.959 ' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:32:48.959 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:48.960 06:43:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:32:48.960 06:43:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:32:50.867 06:43:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:32:50.867 Found 0000:09:00.0 (0x8086 - 0x159b) 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.867 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:32:50.868 Found 0000:09:00.1 (0x8086 - 0x159b) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:32:50.868 Found net devices under 0000:09:00.0: cvl_0_0 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:32:50.868 Found net devices under 0000:09:00.1: cvl_0_1 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:50.868 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:50.868 06:43:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:51.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:51.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.131 ms 00:32:51.127 00:32:51.127 --- 10.0.0.2 ping statistics --- 00:32:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.127 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:51.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:51.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.066 ms 00:32:51.127 00:32:51.127 --- 10.0.0.1 ping statistics --- 00:32:51.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:51.127 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@724 -- # xtrace_disable 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=2242966 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF --interrupt-mode -m 0x2 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 2242966 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@833 -- # '[' -z 2242966 ']' 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@838 -- # local max_retries=100 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:51.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # xtrace_disable 00:32:51.127 06:43:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.127 [2024-11-20 06:43:22.796280] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:51.127 [2024-11-20 06:43:22.797351] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:32:51.127 [2024-11-20 06:43:22.797405] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:51.127 [2024-11-20 06:43:22.870486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.127 [2024-11-20 06:43:22.926604] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:51.127 [2024-11-20 06:43:22.926655] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:51.127 [2024-11-20 06:43:22.926676] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:51.127 [2024-11-20 06:43:22.926692] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:51.127 [2024-11-20 06:43:22.926708] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:51.127 [2024-11-20 06:43:22.927308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:51.388 [2024-11-20 06:43:23.015964] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:51.388 [2024-11-20 06:43:23.016271] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
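At this point the zcopy prologue is complete: nvmf_tcp_init has split the two cvl interfaces between the root namespace and a target namespace, verified connectivity in both directions with ping, loaded nvme-tcp, and nvmfappstart has launched an interrupt-mode nvmf_tgt pinned to core 1 inside that namespace. A condensed sketch of the equivalent commands, taken from the trace above ($SPDK_DIR stands in for the Jenkins workspace path):

    # Target-side interface moves into a private namespace; the initiator side
    # stays in the root namespace (names and addresses as used in the trace).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Allow the NVMe/TCP port through the host firewall, tagged so teardown can
    # strip it again with: iptables-save | grep -v SPDK_NVMF | iptables-restore
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
    ping -c 1 10.0.0.2                                 # root ns -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
    modprobe nvme-tcp
    # Interrupt-mode target on core 1 (-m 0x2) inside the namespace; $SPDK_DIR is
    # a placeholder for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk.
    ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --interrupt-mode -m 0x2 &

The two thread.c NOTICE lines above confirm the --interrupt-mode flag took effect for both the app thread and the first poll group before any RPCs are issued.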
00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@866 -- # return 0 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 [2024-11-20 06:43:23.063951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 [2024-11-20 06:43:23.080100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:51.388 06:43:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 malloc0 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:32:51.388 { 00:32:51.388 "params": { 00:32:51.388 "name": "Nvme$subsystem", 00:32:51.388 "trtype": "$TEST_TRANSPORT", 00:32:51.388 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:51.388 "adrfam": "ipv4", 00:32:51.388 "trsvcid": "$NVMF_PORT", 00:32:51.388 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:51.388 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:51.388 "hdgst": ${hdgst:-false}, 00:32:51.388 "ddgst": ${ddgst:-false} 00:32:51.388 }, 00:32:51.388 "method": "bdev_nvme_attach_controller" 00:32:51.388 } 00:32:51.388 EOF 00:32:51.388 )") 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:32:51.388 06:43:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:32:51.388 "params": { 00:32:51.388 "name": "Nvme1", 00:32:51.388 "trtype": "tcp", 00:32:51.388 "traddr": "10.0.0.2", 00:32:51.388 "adrfam": "ipv4", 00:32:51.388 "trsvcid": "4420", 00:32:51.388 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:51.388 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:51.388 "hdgst": false, 00:32:51.388 "ddgst": false 00:32:51.388 }, 00:32:51.388 "method": "bdev_nvme_attach_controller" 00:32:51.388 }' 00:32:51.388 [2024-11-20 06:43:23.157593] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:32:51.388 [2024-11-20 06:43:23.157681] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2242993 ] 00:32:51.647 [2024-11-20 06:43:23.222828] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:51.647 [2024-11-20 06:43:23.284125] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:51.647 Running I/O for 10 seconds... 00:32:53.966 5620.00 IOPS, 43.91 MiB/s [2024-11-20T05:43:26.741Z] 5704.50 IOPS, 44.57 MiB/s [2024-11-20T05:43:27.678Z] 5714.67 IOPS, 44.65 MiB/s [2024-11-20T05:43:28.667Z] 5709.50 IOPS, 44.61 MiB/s [2024-11-20T05:43:29.601Z] 5712.80 IOPS, 44.63 MiB/s [2024-11-20T05:43:30.535Z] 5726.50 IOPS, 44.74 MiB/s [2024-11-20T05:43:31.909Z] 5728.71 IOPS, 44.76 MiB/s [2024-11-20T05:43:32.842Z] 5730.88 IOPS, 44.77 MiB/s [2024-11-20T05:43:33.776Z] 5736.44 IOPS, 44.82 MiB/s [2024-11-20T05:43:33.776Z] 5742.40 IOPS, 44.86 MiB/s 00:33:01.940 Latency(us) 00:33:01.941 [2024-11-20T05:43:33.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:01.941 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:33:01.941 Verification LBA range: start 0x0 length 0x1000 00:33:01.941 Nvme1n1 : 10.02 5743.93 44.87 0.00 0.00 22222.21 1074.06 30292.20 00:33:01.941 [2024-11-20T05:43:33.777Z] =================================================================================================================== 00:33:01.941 [2024-11-20T05:43:33.777Z] Total : 5743.93 44.87 0.00 0.00 22222.21 1074.06 30292.20 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2244177 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:01.941 { 00:33:01.941 "params": { 00:33:01.941 "name": "Nvme$subsystem", 00:33:01.941 "trtype": "$TEST_TRANSPORT", 00:33:01.941 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:01.941 "adrfam": "ipv4", 00:33:01.941 "trsvcid": "$NVMF_PORT", 00:33:01.941 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:01.941 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:01.941 "hdgst": ${hdgst:-false}, 00:33:01.941 "ddgst": ${ddgst:-false} 00:33:01.941 }, 00:33:01.941 "method": "bdev_nvme_attach_controller" 00:33:01.941 } 00:33:01.941 EOF 00:33:01.941 )") 00:33:01.941 [2024-11-20 06:43:33.743914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 
already in use 00:33:01.941 [2024-11-20 06:43:33.743958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:33:01.941 06:43:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:01.941 "params": { 00:33:01.941 "name": "Nvme1", 00:33:01.941 "trtype": "tcp", 00:33:01.941 "traddr": "10.0.0.2", 00:33:01.941 "adrfam": "ipv4", 00:33:01.941 "trsvcid": "4420", 00:33:01.941 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:01.941 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:01.941 "hdgst": false, 00:33:01.941 "ddgst": false 00:33:01.941 }, 00:33:01.941 "method": "bdev_nvme_attach_controller" 00:33:01.941 }' 00:33:01.941 [2024-11-20 06:43:33.751826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:01.941 [2024-11-20 06:43:33.751850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:01.941 [2024-11-20 06:43:33.759824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:01.941 [2024-11-20 06:43:33.759847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:01.941 [2024-11-20 06:43:33.767822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:01.941 [2024-11-20 06:43:33.767844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.775820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.775841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.783822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.783842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.786651] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:33:02.200 [2024-11-20 06:43:33.786707] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2244177 ] 00:33:02.200 [2024-11-20 06:43:33.791823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.791847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.799824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.799847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.807820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.807841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.815821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.815842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.823821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.823842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.831819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.831840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.839819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.839839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.847819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.847839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.855556] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:02.200 [2024-11-20 06:43:33.855823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.855849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.863857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.863890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.871862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.871897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.879819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.879841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.887819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.887840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:33:02.200 [2024-11-20 06:43:33.895819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.895840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.903821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.903841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.911820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.911841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.916292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:02.200 [2024-11-20 06:43:33.919822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.919843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.927820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.927841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.935858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.935891] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.943864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.943900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.951862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.951898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.959861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.959897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.967862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.967897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.975880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.975917] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.983822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.983843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.991862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.991897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:33.999859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:33.999893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 
06:43:34.007860] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:34.007894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:34.015820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:34.015842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:34.023820] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:34.023841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.200 [2024-11-20 06:43:34.031854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.200 [2024-11-20 06:43:34.031895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.039824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.039848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.047823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.047846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.055824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.055848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.063824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.063847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.071832] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.071855] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.079836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.079873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.087823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.087847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.095827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.095851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 Running I/O for 5 seconds... 
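From "Running I/O for 5 seconds..." onward, bdevperf drives I/O against the attached controller while the zcopy test keeps trying to add a namespace whose NSID the subsystem already owns; each attempt is rejected in the spdk_nvmf_subsystem_add_ns_ext path with "Requested NSID 1 already in use" and surfaces through nvmf_rpc.c as "Unable to add namespace". The interleaved "IOPS, MiB/s" lines further down appear to be bdevperf's periodic throughput summaries. A minimal sketch of the kind of RPC sequence that triggers this rejection follows; the bdev and subsystem names are illustrative and the test script may drive the calls differently:

# Illustrative sketch, not the exact commands run by the nvmf_zcopy test.
# Adding a namespace with an NSID the subsystem already holds is rejected.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # first add succeeds, NSID 1 now exists
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 -n 1   # repeat fails: "Requested NSID 1 already in use"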
00:33:02.459 [2024-11-20 06:43:34.111589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.111632] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.122684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.122710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.135583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.135612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.144841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.144868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.156884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.156908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.167684] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.167709] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.178819] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.178850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.193698] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.193740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.202648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.202689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.214108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.214134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.229646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.229672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.239177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.239202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.251177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.251202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.264350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.264380] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.274048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 
[2024-11-20 06:43:34.274076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.459 [2024-11-20 06:43:34.285869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.459 [2024-11-20 06:43:34.285894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.301413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.301453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.310826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.310865] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.324864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.324889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.334354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.334382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.348951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.348977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.359994] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.360019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.370365] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.370407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.384627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.384670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.394419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.394445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.408544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.408579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.418571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.418597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.432368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.432395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.442056] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.442082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.456104] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.456129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.465330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.465357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.717 [2024-11-20 06:43:34.477174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.717 [2024-11-20 06:43:34.477199] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.718 [2024-11-20 06:43:34.487461] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.718 [2024-11-20 06:43:34.487487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.718 [2024-11-20 06:43:34.502083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.718 [2024-11-20 06:43:34.502108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.718 [2024-11-20 06:43:34.511497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.718 [2024-11-20 06:43:34.511524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.718 [2024-11-20 06:43:34.523140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.718 [2024-11-20 06:43:34.523164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.718 [2024-11-20 06:43:34.535446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.718 [2024-11-20 06:43:34.535474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.718 [2024-11-20 06:43:34.544956] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.718 [2024-11-20 06:43:34.544980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.556095] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.556119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.566328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.566369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.581577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.581621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.591391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.591418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.603402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.603430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.614658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.614682] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.628694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.628744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.638271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.638321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.652791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.652830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.663151] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.663176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.678743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.678768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.694288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.694346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.710117] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.710159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.719939] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.719970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.732039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.732071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.743158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.743189] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.754209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.754237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.765803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.765833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.781372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.781404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.791209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.791251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:02.976 [2024-11-20 06:43:34.803348] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:02.976 [2024-11-20 06:43:34.803376] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.234 [2024-11-20 06:43:34.816119] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.234 [2024-11-20 06:43:34.816147] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.825883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.825909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.837784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.837809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.852048] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.852073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.861919] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.861954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.873768] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.873792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.888268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.888315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.897551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.897594] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.909276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.909325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.919823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.919847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.930704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.930728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.946111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.946137] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.962224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.962265] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.971567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.971594] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.983195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.983221] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:34.997774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:34.997801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:35.007160] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:35.007202] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:35.018856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:35.018882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:35.033862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:35.033904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:35.043410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:35.043438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:35.055378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:35.055406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.235 [2024-11-20 06:43:35.068207] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.235 [2024-11-20 06:43:35.068235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.077561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.077612] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.089455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.089491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 11651.00 IOPS, 91.02 MiB/s [2024-11-20T05:43:35.329Z] [2024-11-20 06:43:35.105372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.105400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.115404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.115431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.127427] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.127454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.139799] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.139826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 
06:43:35.149709] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.149736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.161944] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.161970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.176744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.176785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.186546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.186574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.202615] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.202656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.217938] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.217964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.227456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.227484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.239633] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.239673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.250564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.250604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.266032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.266058] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.275708] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.275734] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.287410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.287437] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.297896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.297921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.313841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.313866] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.493 [2024-11-20 06:43:35.322996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.493 [2024-11-20 06:43:35.323021] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.751 [2024-11-20 06:43:35.334719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.751 [2024-11-20 06:43:35.334744] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.751 [2024-11-20 06:43:35.349344] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.751 [2024-11-20 06:43:35.349372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.751 [2024-11-20 06:43:35.359165] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.751 [2024-11-20 06:43:35.359191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.751 [2024-11-20 06:43:35.371028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.751 [2024-11-20 06:43:35.371054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.751 [2024-11-20 06:43:35.385523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.751 [2024-11-20 06:43:35.385550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.395028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.395066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.407006] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.407031] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.421422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.421450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.430514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.430541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.442170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.442196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.457984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.458024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.467287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.467336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.478696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.478722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.494280] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.494317] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.503943] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.503970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.515631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.515672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.526423] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.526449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.542328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.542357] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.558017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.558044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.567351] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.567393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:03.752 [2024-11-20 06:43:35.579697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:03.752 [2024-11-20 06:43:35.579723] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.590723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.590763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.603829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.603871] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.613049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.613074] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.624996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.625020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.635752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.635775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.646483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.646510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.661312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.661352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.670902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.670928] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.685136] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.685160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.694752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.694778] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.709030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.709055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.718390] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.718418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.734135] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.734161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.751965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.752005] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.762682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.762707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.775131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.775168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.784833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.784859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.796838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.796862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.807576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.807618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.818434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.818460] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.833632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.833671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.010 [2024-11-20 06:43:35.843149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.010 [2024-11-20 06:43:35.843174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.854808] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.854834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.870554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.870595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.885854] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.885881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.903792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.903819] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.913653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.913694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.925497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.925524] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.941759] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.941796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.951261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.951312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.963588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.963629] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.974740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.974765] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.990128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.990154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:35.999731] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:35.999759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.011399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.011435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.022367] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.022394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.037722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.037762] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.047408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.047436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.059338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.059366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.070374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.070400] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.084796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.084823] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.268 [2024-11-20 06:43:36.094399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.268 [2024-11-20 06:43:36.094427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 11672.00 IOPS, 91.19 MiB/s [2024-11-20T05:43:36.362Z] [2024-11-20 06:43:36.109734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.109758] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 [2024-11-20 06:43:36.119449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.119474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 [2024-11-20 06:43:36.131374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.131401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 [2024-11-20 06:43:36.142162] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.142187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 [2024-11-20 06:43:36.158101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.158141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 [2024-11-20 06:43:36.176049] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.176076] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.526 [2024-11-20 06:43:36.186061] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.526 [2024-11-20 06:43:36.186088] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.197954] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.197980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.213851] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.213878] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 
06:43:36.223236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.223268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.235067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.235092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.247816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.247852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.256950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.256976] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.268749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.268775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.278780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.278806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.293138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.293177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.302632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.302657] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.318406] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.318432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.333794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.333833] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.342722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.342749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.527 [2024-11-20 06:43:36.358435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.527 [2024-11-20 06:43:36.358463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.374059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.374085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.383565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.383604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.395233] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.395259] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.408892] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.408919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.418474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.418515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.430346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.430386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.446706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.446731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.462472] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.462499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.477935] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.477962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.495430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.495456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.505108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.505136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.517088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.517128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.527746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.527770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.538455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.538483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.551376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.551404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.560770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.560795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.572532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.572559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.583410] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.583436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.594087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.594112] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:04.785 [2024-11-20 06:43:36.610274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:04.785 [2024-11-20 06:43:36.610300] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.043 [2024-11-20 06:43:36.619833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.619861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.631838] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.631862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.642658] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.642682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.656573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.656614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.666043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.666069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.677880] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.677904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.691675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.691702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.701372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.701399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.717020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.717044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.726975] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.727000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.741625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.741652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.758878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.758905] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.773572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.773600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.783102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.783128] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.795080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.795105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.809747] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.809774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.819338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.819379] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.831346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.831388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.842454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.842496] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.854934] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.854961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.044 [2024-11-20 06:43:36.868452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.044 [2024-11-20 06:43:36.868479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.878488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.878516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.890211] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.890237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.905970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.905996] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.915292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.915345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.927024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.927063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.939736] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.939766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.949271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.949319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.964683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.964725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.974022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.974050] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:36.985733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:36.985757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.000299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.000338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.009608] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.009634] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.021309] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.021336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.038849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.038889] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.054043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.054072] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.063313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.063341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.075066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.075093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.086041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.086080] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.101950] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.101989] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 11679.00 IOPS, 91.24 MiB/s [2024-11-20T05:43:37.138Z] [2024-11-20 06:43:37.111205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:05.302 [2024-11-20 06:43:37.111231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.302 [2024-11-20 06:43:37.122681] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.302 [2024-11-20 06:43:37.122705] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.560 [2024-11-20 06:43:37.138483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.560 [2024-11-20 06:43:37.138511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.154027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.154054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.171612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.171639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.181212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.181264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.192447] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.192474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.203130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.203155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.218867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.218894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.234171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.234198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.252036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.252062] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.261857] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.261884] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.273488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.273515] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.284366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.284392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.295158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.295184] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.308164] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.308192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.318174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.318200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.332737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.332763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.342261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.342286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.356477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.356505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.366350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.366377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.561 [2024-11-20 06:43:37.382354] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.561 [2024-11-20 06:43:37.382382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.399636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.399663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.409345] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.409385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.420737] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.420772] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.431544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.431571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.445454] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.445483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.455022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.455046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.469837] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.469862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.488917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.488944] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.499278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.499327] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.510214] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.510251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.525550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.525579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.535086] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.535125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.546869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.546908] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.562155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.562183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.571685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.571711] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.583621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.583662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.594255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.594279] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.609856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.609898] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.619388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.619413] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.631336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.631378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:05.818 [2024-11-20 06:43:37.644080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:05.818 [2024-11-20 06:43:37.644106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.075 [2024-11-20 06:43:37.653823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.075 [2024-11-20 06:43:37.653874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.075 [2024-11-20 06:43:37.666065] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.075 [2024-11-20 06:43:37.666089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.075 [2024-11-20 06:43:37.682009] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.075 [2024-11-20 06:43:37.682034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.075 [2024-11-20 06:43:37.691550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.075 [2024-11-20 06:43:37.691592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.075 [2024-11-20 06:43:37.703181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.075 [2024-11-20 06:43:37.703207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.075 [2024-11-20 06:43:37.715433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.715476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.725230] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.725254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.736468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.736495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.747644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.747668] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.758457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.758484] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.772788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.772813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.782188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.782213] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.794004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.794030] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.810024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.810048] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.819401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.819427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.831459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.831487] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.842106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.842132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.857278] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.857309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.866781] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.866805] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.882596] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.882630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.898195] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.898222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.076 [2024-11-20 06:43:37.907852] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.076 [2024-11-20 06:43:37.907880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.919267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.919313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.932150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.932178] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.941823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.941848] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.953644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.953685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.968486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.968528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.977550] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.977577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:37.989473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:37.989500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.005521] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.005549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.014986] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.015011] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.028542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.028570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.037898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.037923] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.049716] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.049740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.065003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.065043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.074242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.074268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.089028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.089054] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.098727] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.098752] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 11711.25 IOPS, 91.49 MiB/s [2024-11-20T05:43:38.170Z] [2024-11-20 06:43:38.114471] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.114498] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.131923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.131950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.141805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.141830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.334 [2024-11-20 06:43:38.153694] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.334 [2024-11-20 06:43:38.153719] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.169398] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.169424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.178968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.178992] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.192315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:33:06.592 [2024-11-20 06:43:38.192343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.201395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.201422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.213464] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.213490] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.228623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.228666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.237866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.237892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.249809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.249835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.264601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.264643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.274275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.274310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.289067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.289092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.300001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.300040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.310831] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.310856] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.325598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.325624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.334885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.334911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.592 [2024-11-20 06:43:38.351235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.592 [2024-11-20 06:43:38.351261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.593 [2024-11-20 06:43:38.361901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.593 [2024-11-20 06:43:38.361925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.593 [2024-11-20 06:43:38.377289] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.593 [2024-11-20 06:43:38.377336] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.593 [2024-11-20 06:43:38.386662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.593 [2024-11-20 06:43:38.386689] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.593 [2024-11-20 06:43:38.400630] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.593 [2024-11-20 06:43:38.400656] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.593 [2024-11-20 06:43:38.409777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.593 [2024-11-20 06:43:38.409818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.593 [2024-11-20 06:43:38.421530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.593 [2024-11-20 06:43:38.421557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.437545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.437573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.447190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.447217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.459104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.459129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.469963] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.469991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.485704] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.485731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.495079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.495119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.507224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.507249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.518436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.518463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.533167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.533195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.542855] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.542882] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.558995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.559021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.572458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.572494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.581916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.581941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.593563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.593591] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.608946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.608972] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.618435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.618462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.630167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.630194] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.645758] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.645784] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.655377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.655404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.667382] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.667409] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:06.851 [2024-11-20 06:43:38.681229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:06.851 [2024-11-20 06:43:38.681256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.690671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.690695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.705795] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.705821] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.715377] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.715418] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.727098] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.727123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.740169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.740196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.749554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.749580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.761371] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.761397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.777036] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.777063] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.786267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.786315] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.802369] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.802402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.811918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.811944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.824058] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.824099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.834739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.834766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.848497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.848525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.857874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.857900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.869463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.869491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.886315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.886343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.901901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.901928] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.911297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.911334] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.923319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.923347] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.110 [2024-11-20 06:43:38.934530] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.110 [2024-11-20 06:43:38.934557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:38.949743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:38.949785] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:38.959429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:38.959456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:38.971141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:38.971183] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:38.983764] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:38.983792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:38.993483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:38.993511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:39.005445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:39.005472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:39.021016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:39.021043] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:39.030523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:39.030561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.368 [2024-11-20 06:43:39.043921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.368 [2024-11-20 06:43:39.043947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.053660] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.053687] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.064930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.064956] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.075379] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.075407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.089900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.089943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.099379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.099407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.111167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.111193] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 11708.20 IOPS, 91.47 MiB/s [2024-11-20T05:43:39.205Z] [2024-11-20 06:43:39.119864] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.119888] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 00:33:07.369 Latency(us) 00:33:07.369 [2024-11-20T05:43:39.205Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:07.369 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:33:07.369 Nvme1n1 : 5.01 11710.13 91.49 0.00 0.00 10917.24 2888.44 18058.81 00:33:07.369 [2024-11-20T05:43:39.205Z] =================================================================================================================== 00:33:07.369 [2024-11-20T05:43:39.205Z] Total : 11710.13 91.49 0.00 0.00 10917.24 2888.44 18058.81 00:33:07.369 [2024-11-20 06:43:39.127825] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.127849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.135826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.135850] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.143849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.143877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.151900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.151948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.159902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.159952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.167901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.167943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.175900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.175947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 
06:43:39.183903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.183949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.191902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.191949] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.369 [2024-11-20 06:43:39.199907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.369 [2024-11-20 06:43:39.199953] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.207909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.207955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.215901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.215948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.223904] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.223948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.231907] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.231954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.239902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.239950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.247903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.247946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.255897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.255944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.263900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.263948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.271896] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.271936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.279823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.279844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.287821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.287842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.295821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.295843] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.303822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.303842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.311895] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.311940] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.319901] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.319943] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.327897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.327938] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.335823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.335844] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.343824] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.343845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 [2024-11-20 06:43:39.351822] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:33:07.626 [2024-11-20 06:43:39.351843] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:07.626 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2244177) - No such process 00:33:07.626 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2244177 00:33:07.626 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:33:07.626 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.626 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:07.627 delay0 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 
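Editor's note: the namespace swap that zcopy.sh performs at its lines 52-54 in the trace above (remove NSID 1, wrap malloc0 in a delay bdev, re-attach it as NSID 1) can be reproduced by hand with SPDK's scripts/rpc.py. The sketch below simply mirrors the flags visible in the rpc_cmd trace and assumes the default local RPC socket; it is illustrative, not the test's exact rpc_cmd wrapper.

    # Detach namespace 1 from the subsystem before re-adding it with a different backing bdev.
    ./scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    # Wrap the existing malloc0 bdev in a delay bdev; the four values are average/p99 read and
    # write latencies in microseconds (1000000 us = 1 s), as passed in the trace above.
    ./scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    # Attach the delay bdev back to the subsystem as NSID 1. This only succeeds after the remove
    # above; attempting it while NSID 1 is still attached produces the
    # "Requested NSID 1 already in use" errors seen repeatedly earlier in this run.
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1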
00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:07.627 06:43:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:33:07.627 [2024-11-20 06:43:39.436680] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:33:15.733 [2024-11-20 06:43:46.629438] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11a48a0 is same with the state(6) to be set 00:33:15.733 Initializing NVMe Controllers 00:33:15.733 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:33:15.733 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:33:15.733 Initialization complete. Launching workers. 00:33:15.733 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 227, failed: 26300 00:33:15.733 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 26385, failed to submit 142 00:33:15.733 success 26312, unsuccessful 73, failed 0 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:15.733 rmmod nvme_tcp 00:33:15.733 rmmod nvme_fabrics 00:33:15.733 rmmod nvme_keyring 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 2242966 ']' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 2242966 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@952 -- # '[' -z 2242966 ']' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@956 -- # kill -0 2242966 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # uname 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2242966 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2242966' 00:33:15.733 killing process with pid 2242966 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@971 -- # kill 2242966 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@976 -- # wait 2242966 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:15.733 06:43:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:17.636 00:33:17.636 real 0m28.616s 00:33:17.636 user 0m40.843s 00:33:17.636 sys 0m9.888s 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:17.636 ************************************ 00:33:17.636 END TEST nvmf_zcopy 00:33:17.636 ************************************ 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:17.636 ************************************ 00:33:17.636 START TEST nvmf_nmic 
00:33:17.636 ************************************ 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:33:17.636 * Looking for test storage... 00:33:17.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lcov --version 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.636 --rc genhtml_branch_coverage=1 00:33:17.636 --rc genhtml_function_coverage=1 00:33:17.636 --rc genhtml_legend=1 00:33:17.636 --rc geninfo_all_blocks=1 00:33:17.636 --rc geninfo_unexecuted_blocks=1 00:33:17.636 00:33:17.636 ' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.636 --rc genhtml_branch_coverage=1 00:33:17.636 --rc genhtml_function_coverage=1 00:33:17.636 --rc genhtml_legend=1 00:33:17.636 --rc geninfo_all_blocks=1 00:33:17.636 --rc geninfo_unexecuted_blocks=1 00:33:17.636 00:33:17.636 ' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.636 --rc genhtml_branch_coverage=1 00:33:17.636 --rc genhtml_function_coverage=1 00:33:17.636 --rc genhtml_legend=1 00:33:17.636 --rc geninfo_all_blocks=1 00:33:17.636 --rc geninfo_unexecuted_blocks=1 00:33:17.636 00:33:17.636 ' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:17.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.636 --rc genhtml_branch_coverage=1 00:33:17.636 --rc genhtml_function_coverage=1 00:33:17.636 --rc genhtml_legend=1 00:33:17.636 --rc geninfo_all_blocks=1 00:33:17.636 --rc geninfo_unexecuted_blocks=1 00:33:17.636 00:33:17.636 ' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.636 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.637 06:43:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:33:17.637 06:43:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:20.171 06:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:20.171 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.171 06:43:51 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:20.171 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.171 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:20.171 Found net devices under 0000:09:00.0: cvl_0_0 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:20.172 
06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:20.172 Found net devices under 0000:09:00.1: cvl_0_1 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 
00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:20.172 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:20.172 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:33:20.172 00:33:20.172 --- 10.0.0.2 ping statistics --- 00:33:20.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.172 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:20.172 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:20.172 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:33:20.172 00:33:20.172 --- 10.0.0.1 ping statistics --- 00:33:20.172 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:20.172 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=2247683 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 2247683 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@833 -- # '[' -z 2247683 ']' 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:20.172 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.172 [2024-11-20 06:43:51.619880] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:20.172 [2024-11-20 06:43:51.620946] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:33:20.172 [2024-11-20 06:43:51.621002] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:20.172 [2024-11-20 06:43:51.691745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:20.172 [2024-11-20 06:43:51.752078] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:20.172 [2024-11-20 06:43:51.752126] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:20.172 [2024-11-20 06:43:51.752153] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:20.172 [2024-11-20 06:43:51.752165] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:20.172 [2024-11-20 06:43:51.752174] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:20.172 [2024-11-20 06:43:51.753744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:20.172 [2024-11-20 06:43:51.753810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:20.172 [2024-11-20 06:43:51.753879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:20.172 [2024-11-20 06:43:51.753883] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:20.172 [2024-11-20 06:43:51.845836] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:20.172 [2024-11-20 06:43:51.846050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:20.172 [2024-11-20 06:43:51.846355] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:20.172 [2024-11-20 06:43:51.847003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:20.172 [2024-11-20 06:43:51.847233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@866 -- # return 0 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.172 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.172 [2024-11-20 06:43:51.902508] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 Malloc0 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:20.173 
06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 [2024-11-20 06:43:51.974737] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:33:20.173 test case1: single bdev can't be used in multiple subsystems 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.173 06:43:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.173 [2024-11-20 06:43:51.998459] bdev.c:8321:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:33:20.173 [2024-11-20 06:43:51.998488] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:33:20.173 [2024-11-20 06:43:51.998503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:33:20.173 request: 00:33:20.173 { 00:33:20.173 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:33:20.173 "namespace": { 00:33:20.173 "bdev_name": "Malloc0", 00:33:20.173 "no_auto_visible": false 00:33:20.173 }, 00:33:20.173 "method": "nvmf_subsystem_add_ns", 00:33:20.173 "req_id": 1 00:33:20.173 } 00:33:20.173 Got JSON-RPC error response 00:33:20.173 response: 00:33:20.173 { 00:33:20.173 "code": -32602, 00:33:20.173 "message": "Invalid parameters" 00:33:20.173 } 00:33:20.173 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:33:20.173 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:33:20.173 06:43:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:33:20.173 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:33:20.173 Adding namespace failed - expected result. 00:33:20.173 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:33:20.173 test case2: host connect to nvmf target in multiple paths 00:33:20.431 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:20.431 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:20.431 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:20.431 [2024-11-20 06:43:52.006555] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:20.431 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:20.431 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:20.431 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:33:20.690 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:33:20.690 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1200 -- # local i=0 00:33:20.690 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:20.690 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:33:20.690 06:43:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1207 -- # sleep 2 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # return 0 00:33:23.213 06:43:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:23.213 [global] 00:33:23.213 thread=1 00:33:23.213 invalidate=1 
00:33:23.213 rw=write 00:33:23.213 time_based=1 00:33:23.213 runtime=1 00:33:23.213 ioengine=libaio 00:33:23.213 direct=1 00:33:23.213 bs=4096 00:33:23.213 iodepth=1 00:33:23.213 norandommap=0 00:33:23.213 numjobs=1 00:33:23.213 00:33:23.213 verify_dump=1 00:33:23.213 verify_backlog=512 00:33:23.213 verify_state_save=0 00:33:23.213 do_verify=1 00:33:23.213 verify=crc32c-intel 00:33:23.213 [job0] 00:33:23.213 filename=/dev/nvme0n1 00:33:23.213 Could not set queue depth (nvme0n1) 00:33:23.213 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:23.213 fio-3.35 00:33:23.213 Starting 1 thread 00:33:24.146 00:33:24.146 job0: (groupid=0, jobs=1): err= 0: pid=2248182: Wed Nov 20 06:43:55 2024 00:33:24.146 read: IOPS=295, BW=1183KiB/s (1211kB/s)(1224KiB/1035msec) 00:33:24.146 slat (nsec): min=6294, max=53451, avg=15217.92, stdev=6831.16 00:33:24.146 clat (usec): min=215, max=42000, avg=2981.16, stdev=10269.60 00:33:24.146 lat (usec): min=224, max=42032, avg=2996.38, stdev=10272.43 00:33:24.146 clat percentiles (usec): 00:33:24.146 | 1.00th=[ 219], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 247], 00:33:24.146 | 30.00th=[ 249], 40.00th=[ 251], 50.00th=[ 253], 60.00th=[ 258], 00:33:24.146 | 70.00th=[ 262], 80.00th=[ 269], 90.00th=[ 482], 95.00th=[41681], 00:33:24.146 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:24.146 | 99.99th=[42206] 00:33:24.146 write: IOPS=494, BW=1979KiB/s (2026kB/s)(2048KiB/1035msec); 0 zone resets 00:33:24.146 slat (nsec): min=8006, max=61711, avg=18779.90, stdev=7954.35 00:33:24.146 clat (usec): min=138, max=269, avg=203.77, stdev=32.84 00:33:24.146 lat (usec): min=147, max=299, avg=222.55, stdev=32.77 00:33:24.146 clat percentiles (usec): 00:33:24.146 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 165], 20.00th=[ 174], 00:33:24.146 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 204], 60.00th=[ 219], 00:33:24.146 | 70.00th=[ 227], 80.00th=[ 231], 90.00th=[ 255], 95.00th=[ 260], 00:33:24.146 | 99.00th=[ 262], 99.50th=[ 262], 99.90th=[ 269], 99.95th=[ 269], 00:33:24.146 | 99.99th=[ 269] 00:33:24.146 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:33:24.146 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:24.146 lat (usec) : 250=68.09%, 500=28.61%, 750=0.86% 00:33:24.146 lat (msec) : 50=2.44% 00:33:24.146 cpu : usr=1.16%, sys=1.64%, ctx=818, majf=0, minf=1 00:33:24.146 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:24.146 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.146 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:24.146 issued rwts: total=306,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:24.146 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:24.146 00:33:24.146 Run status group 0 (all jobs): 00:33:24.146 READ: bw=1183KiB/s (1211kB/s), 1183KiB/s-1183KiB/s (1211kB/s-1211kB/s), io=1224KiB (1253kB), run=1035-1035msec 00:33:24.146 WRITE: bw=1979KiB/s (2026kB/s), 1979KiB/s-1979KiB/s (2026kB/s-2026kB/s), io=2048KiB (2097kB), run=1035-1035msec 00:33:24.146 00:33:24.146 Disk stats (read/write): 00:33:24.146 nvme0n1: ios=352/512, merge=0/0, ticks=754/85, in_queue=839, util=91.38% 00:33:24.146 06:43:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:24.403 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:33:24.403 06:43:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:24.403 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1221 -- # local i=0 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1233 -- # return 0 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:24.404 rmmod nvme_tcp 00:33:24.404 rmmod nvme_fabrics 00:33:24.404 rmmod nvme_keyring 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 2247683 ']' 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 2247683 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@952 -- # '[' -z 2247683 ']' 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@956 -- # kill -0 2247683 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # uname 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2247683 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@970 -- # 
echo 'killing process with pid 2247683' 00:33:24.404 killing process with pid 2247683 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@971 -- # kill 2247683 00:33:24.404 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@976 -- # wait 2247683 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:24.663 06:43:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:27.198 00:33:27.198 real 0m9.377s 00:33:27.198 user 0m17.787s 00:33:27.198 sys 0m3.375s 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:33:27.198 ************************************ 00:33:27.198 END TEST nvmf_nmic 00:33:27.198 ************************************ 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:27.198 ************************************ 00:33:27.198 START TEST nvmf_fio_target 00:33:27.198 ************************************ 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:33:27.198 * Looking for test storage... 
00:33:27.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lcov --version 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:27.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.198 --rc genhtml_branch_coverage=1 00:33:27.198 --rc genhtml_function_coverage=1 00:33:27.198 --rc genhtml_legend=1 00:33:27.198 --rc geninfo_all_blocks=1 00:33:27.198 --rc geninfo_unexecuted_blocks=1 00:33:27.198 00:33:27.198 ' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:27.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.198 --rc genhtml_branch_coverage=1 00:33:27.198 --rc genhtml_function_coverage=1 00:33:27.198 --rc genhtml_legend=1 00:33:27.198 --rc geninfo_all_blocks=1 00:33:27.198 --rc geninfo_unexecuted_blocks=1 00:33:27.198 00:33:27.198 ' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:27.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.198 --rc genhtml_branch_coverage=1 00:33:27.198 --rc genhtml_function_coverage=1 00:33:27.198 --rc genhtml_legend=1 00:33:27.198 --rc geninfo_all_blocks=1 00:33:27.198 --rc geninfo_unexecuted_blocks=1 00:33:27.198 00:33:27.198 ' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:27.198 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:27.198 --rc genhtml_branch_coverage=1 00:33:27.198 --rc genhtml_function_coverage=1 00:33:27.198 --rc genhtml_legend=1 00:33:27.198 --rc geninfo_all_blocks=1 00:33:27.198 --rc geninfo_unexecuted_blocks=1 00:33:27.198 
00:33:27.198 ' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:27.198 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:33:27.199 06:43:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:29.101 06:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:29.101 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:29.102 06:44:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:29.102 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:29.102 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:29.102 Found net 
devices under 0000:09:00.0: cvl_0_0 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:29.102 Found net devices under 0000:09:00.1: cvl_0_1 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:29.102 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:29.360 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:29.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:29.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:33:29.361 00:33:29.361 --- 10.0.0.2 ping statistics --- 00:33:29.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.361 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:29.361 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:29.361 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.052 ms 00:33:29.361 00:33:29.361 --- 10.0.0.1 ping statistics --- 00:33:29.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:29.361 rtt min/avg/max/mdev = 0.052/0.052/0.052/0.000 ms 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=2250259 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 2250259 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@833 -- # '[' -z 2250259 ']' 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:29.361 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:29.361 06:44:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.361 [2024-11-20 06:44:01.025435] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:29.361 [2024-11-20 06:44:01.026504] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:33:29.361 [2024-11-20 06:44:01.026568] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:29.361 [2024-11-20 06:44:01.096731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:29.361 [2024-11-20 06:44:01.151592] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:29.361 [2024-11-20 06:44:01.151644] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:29.361 [2024-11-20 06:44:01.151672] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:29.361 [2024-11-20 06:44:01.151683] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:29.361 [2024-11-20 06:44:01.151692] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:29.361 [2024-11-20 06:44:01.153258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.361 [2024-11-20 06:44:01.153332] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:29.361 [2024-11-20 06:44:01.153361] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:29.361 [2024-11-20 06:44:01.153365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:29.618 [2024-11-20 06:44:01.240928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:29.618 [2024-11-20 06:44:01.241131] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:29.618 [2024-11-20 06:44:01.241423] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:29.619 [2024-11-20 06:44:01.242063] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:29.619 [2024-11-20 06:44:01.242312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@866 -- # return 0 00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:29.619 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:29.876 [2024-11-20 06:44:01.558126] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:29.876 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:30.133 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:33:30.133 06:44:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:30.390 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:33:30.390 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:30.648 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:33:30.648 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:31.212 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:33:31.212 06:44:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:33:31.469 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:31.727 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:33:31.727 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:31.984 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:33:31.984 06:44:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:33:32.241 06:44:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:33:32.241 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:33:32.805 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:33:33.063 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:33.063 06:44:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:33.320 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:33:33.320 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:33:33.577 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:33.835 [2024-11-20 06:44:05.522259] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:33.835 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:33:34.122 06:44:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:33:34.405 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:33:34.663 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:33:34.663 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1200 -- # local i=0 00:33:34.663 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:33:34.663 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # [[ -n 4 ]] 00:33:34.663 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # nvme_device_counter=4 00:33:34.663 06:44:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1207 -- # sleep 2 00:33:36.561 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:33:36.561 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # lsblk -l -o 
NAME,SERIAL 00:33:36.562 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:33:36.562 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # nvme_devices=4 00:33:36.562 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:33:36.562 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # return 0 00:33:36.562 06:44:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:33:36.562 [global] 00:33:36.562 thread=1 00:33:36.562 invalidate=1 00:33:36.562 rw=write 00:33:36.562 time_based=1 00:33:36.562 runtime=1 00:33:36.562 ioengine=libaio 00:33:36.562 direct=1 00:33:36.562 bs=4096 00:33:36.562 iodepth=1 00:33:36.562 norandommap=0 00:33:36.562 numjobs=1 00:33:36.562 00:33:36.562 verify_dump=1 00:33:36.562 verify_backlog=512 00:33:36.562 verify_state_save=0 00:33:36.562 do_verify=1 00:33:36.562 verify=crc32c-intel 00:33:36.562 [job0] 00:33:36.562 filename=/dev/nvme0n1 00:33:36.562 [job1] 00:33:36.562 filename=/dev/nvme0n2 00:33:36.562 [job2] 00:33:36.562 filename=/dev/nvme0n3 00:33:36.562 [job3] 00:33:36.562 filename=/dev/nvme0n4 00:33:36.562 Could not set queue depth (nvme0n1) 00:33:36.562 Could not set queue depth (nvme0n2) 00:33:36.562 Could not set queue depth (nvme0n3) 00:33:36.562 Could not set queue depth (nvme0n4) 00:33:36.819 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:36.819 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:36.819 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:36.819 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:36.819 fio-3.35 00:33:36.819 Starting 4 threads 00:33:38.191 00:33:38.191 job0: (groupid=0, jobs=1): err= 0: pid=2251330: Wed Nov 20 06:44:09 2024 00:33:38.191 read: IOPS=510, BW=2042KiB/s (2091kB/s)(2120KiB/1038msec) 00:33:38.191 slat (nsec): min=9082, max=32924, avg=12143.78, stdev=3424.40 00:33:38.191 clat (usec): min=208, max=41019, avg=1553.79, stdev=7187.37 00:33:38.191 lat (usec): min=218, max=41035, avg=1565.93, stdev=7189.45 00:33:38.191 clat percentiles (usec): 00:33:38.191 | 1.00th=[ 212], 5.00th=[ 217], 10.00th=[ 219], 20.00th=[ 223], 00:33:38.191 | 30.00th=[ 225], 40.00th=[ 227], 50.00th=[ 229], 60.00th=[ 233], 00:33:38.191 | 70.00th=[ 235], 80.00th=[ 241], 90.00th=[ 247], 95.00th=[ 269], 00:33:38.191 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:38.191 | 99.99th=[41157] 00:33:38.191 write: IOPS=986, BW=3946KiB/s (4041kB/s)(4096KiB/1038msec); 0 zone resets 00:33:38.191 slat (nsec): min=5519, max=56772, avg=14500.56, stdev=5332.60 00:33:38.191 clat (usec): min=146, max=583, avg=182.25, stdev=26.53 00:33:38.191 lat (usec): min=152, max=600, avg=196.75, stdev=28.37 00:33:38.191 clat percentiles (usec): 00:33:38.191 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 163], 20.00th=[ 167], 00:33:38.191 | 30.00th=[ 172], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:33:38.191 | 70.00th=[ 186], 80.00th=[ 196], 90.00th=[ 204], 95.00th=[ 210], 00:33:38.191 | 99.00th=[ 
318], 99.50th=[ 330], 99.90th=[ 343], 99.95th=[ 586], 00:33:38.191 | 99.99th=[ 586] 00:33:38.191 bw ( KiB/s): min= 8192, max= 8192, per=36.76%, avg=8192.00, stdev= 0.00, samples=1 00:33:38.191 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:38.191 lat (usec) : 250=95.69%, 500=3.09%, 750=0.06% 00:33:38.191 lat (msec) : 10=0.06%, 50=1.09% 00:33:38.191 cpu : usr=1.16%, sys=2.12%, ctx=1555, majf=0, minf=2 00:33:38.191 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:38.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.191 issued rwts: total=530,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.191 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:38.191 job1: (groupid=0, jobs=1): err= 0: pid=2251331: Wed Nov 20 06:44:09 2024 00:33:38.191 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:33:38.191 slat (nsec): min=5680, max=60106, avg=13755.14, stdev=5476.97 00:33:38.191 clat (usec): min=194, max=1082, avg=246.40, stdev=29.10 00:33:38.191 lat (usec): min=205, max=1099, avg=260.16, stdev=30.62 00:33:38.191 clat percentiles (usec): 00:33:38.191 | 1.00th=[ 204], 5.00th=[ 219], 10.00th=[ 225], 20.00th=[ 231], 00:33:38.191 | 30.00th=[ 235], 40.00th=[ 239], 50.00th=[ 241], 60.00th=[ 245], 00:33:38.191 | 70.00th=[ 249], 80.00th=[ 269], 90.00th=[ 273], 95.00th=[ 285], 00:33:38.191 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 371], 99.95th=[ 388], 00:33:38.191 | 99.99th=[ 1090] 00:33:38.191 write: IOPS=2196, BW=8787KiB/s (8998kB/s)(8796KiB/1001msec); 0 zone resets 00:33:38.191 slat (nsec): min=5997, max=48502, avg=14779.17, stdev=5301.07 00:33:38.191 clat (usec): min=146, max=519, avg=189.79, stdev=23.65 00:33:38.191 lat (usec): min=160, max=526, avg=204.57, stdev=25.71 00:33:38.191 clat percentiles (usec): 00:33:38.191 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 172], 20.00th=[ 178], 00:33:38.191 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190], 00:33:38.191 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 225], 00:33:38.191 | 99.00th=[ 297], 99.50th=[ 310], 99.90th=[ 375], 99.95th=[ 388], 00:33:38.191 | 99.99th=[ 519] 00:33:38.191 bw ( KiB/s): min= 8192, max= 8192, per=36.76%, avg=8192.00, stdev= 0.00, samples=1 00:33:38.191 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:38.191 lat (usec) : 250=84.18%, 500=15.78%, 750=0.02% 00:33:38.192 lat (msec) : 2=0.02% 00:33:38.192 cpu : usr=4.10%, sys=6.10%, ctx=4250, majf=0, minf=1 00:33:38.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:38.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.192 issued rwts: total=2048,2199,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:38.192 job2: (groupid=0, jobs=1): err= 0: pid=2251332: Wed Nov 20 06:44:09 2024 00:33:38.192 read: IOPS=1513, BW=6055KiB/s (6200kB/s)(6164KiB/1018msec) 00:33:38.192 slat (nsec): min=4934, max=26619, avg=7320.57, stdev=2811.04 00:33:38.192 clat (usec): min=211, max=41011, avg=383.48, stdev=2310.64 00:33:38.192 lat (usec): min=217, max=41026, avg=390.80, stdev=2311.01 00:33:38.192 clat percentiles (usec): 00:33:38.192 | 1.00th=[ 223], 5.00th=[ 227], 10.00th=[ 229], 20.00th=[ 233], 00:33:38.192 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 
243], 60.00th=[ 249], 00:33:38.192 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 285], 95.00th=[ 310], 00:33:38.192 | 99.00th=[ 363], 99.50th=[ 453], 99.90th=[41157], 99.95th=[41157], 00:33:38.192 | 99.99th=[41157] 00:33:38.192 write: IOPS=2011, BW=8047KiB/s (8240kB/s)(8192KiB/1018msec); 0 zone resets 00:33:38.192 slat (nsec): min=6415, max=38984, avg=8326.37, stdev=1711.26 00:33:38.192 clat (usec): min=152, max=768, avg=190.62, stdev=32.62 00:33:38.192 lat (usec): min=159, max=777, avg=198.95, stdev=33.12 00:33:38.192 clat percentiles (usec): 00:33:38.192 | 1.00th=[ 157], 5.00th=[ 161], 10.00th=[ 165], 20.00th=[ 169], 00:33:38.192 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 186], 00:33:38.192 | 70.00th=[ 190], 80.00th=[ 202], 90.00th=[ 245], 95.00th=[ 253], 00:33:38.192 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 355], 99.95th=[ 486], 00:33:38.192 | 99.99th=[ 766] 00:33:38.192 bw ( KiB/s): min= 8048, max= 8336, per=36.76%, avg=8192.00, stdev=203.65, samples=2 00:33:38.192 iops : min= 2012, max= 2084, avg=2048.00, stdev=50.91, samples=2 00:33:38.192 lat (usec) : 250=80.50%, 500=19.34%, 1000=0.03% 00:33:38.192 lat (msec) : 50=0.14% 00:33:38.192 cpu : usr=1.57%, sys=2.65%, ctx=3591, majf=0, minf=1 00:33:38.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:38.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.192 issued rwts: total=1541,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:38.192 job3: (groupid=0, jobs=1): err= 0: pid=2251333: Wed Nov 20 06:44:09 2024 00:33:38.192 read: IOPS=30, BW=122KiB/s (125kB/s)(124KiB/1016msec) 00:33:38.192 slat (nsec): min=8644, max=37057, avg=22134.45, stdev=10454.10 00:33:38.192 clat (usec): min=269, max=41103, avg=28376.85, stdev=18758.23 00:33:38.192 lat (usec): min=278, max=41117, avg=28398.98, stdev=18764.29 00:33:38.192 clat percentiles (usec): 00:33:38.192 | 1.00th=[ 269], 5.00th=[ 285], 10.00th=[ 326], 20.00th=[ 330], 00:33:38.192 | 30.00th=[16188], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:33:38.192 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:38.192 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:38.192 | 99.99th=[41157] 00:33:38.192 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:33:38.192 slat (nsec): min=5896, max=51612, avg=19250.76, stdev=7089.29 00:33:38.192 clat (usec): min=181, max=1010, avg=239.32, stdev=51.20 00:33:38.192 lat (usec): min=188, max=1031, avg=258.57, stdev=53.53 00:33:38.192 clat percentiles (usec): 00:33:38.192 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 212], 00:33:38.192 | 30.00th=[ 221], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 237], 00:33:38.192 | 70.00th=[ 241], 80.00th=[ 253], 90.00th=[ 289], 95.00th=[ 314], 00:33:38.192 | 99.00th=[ 404], 99.50th=[ 429], 99.90th=[ 1012], 99.95th=[ 1012], 00:33:38.192 | 99.99th=[ 1012] 00:33:38.192 bw ( KiB/s): min= 4096, max= 4096, per=18.38%, avg=4096.00, stdev= 0.00, samples=1 00:33:38.192 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:38.192 lat (usec) : 250=73.66%, 500=21.92%, 750=0.18% 00:33:38.192 lat (msec) : 2=0.18%, 20=0.18%, 50=3.87% 00:33:38.192 cpu : usr=0.69%, sys=1.18%, ctx=543, majf=0, minf=1 00:33:38.192 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:38.192 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:38.192 issued rwts: total=31,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:38.192 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:38.192 00:33:38.192 Run status group 0 (all jobs): 00:33:38.192 READ: bw=15.6MiB/s (16.4MB/s), 122KiB/s-8184KiB/s (125kB/s-8380kB/s), io=16.2MiB (17.0MB), run=1001-1038msec 00:33:38.192 WRITE: bw=21.8MiB/s (22.8MB/s), 2016KiB/s-8787KiB/s (2064kB/s-8998kB/s), io=22.6MiB (23.7MB), run=1001-1038msec 00:33:38.192 00:33:38.192 Disk stats (read/write): 00:33:38.192 nvme0n1: ios=575/1024, merge=0/0, ticks=645/182, in_queue=827, util=86.47% 00:33:38.192 nvme0n2: ios=1601/2048, merge=0/0, ticks=1283/380, in_queue=1663, util=89.11% 00:33:38.192 nvme0n3: ios=1600/2048, merge=0/0, ticks=716/385, in_queue=1101, util=94.76% 00:33:38.192 nvme0n4: ios=83/512, merge=0/0, ticks=752/122, in_queue=874, util=95.67% 00:33:38.192 06:44:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:33:38.192 [global] 00:33:38.192 thread=1 00:33:38.192 invalidate=1 00:33:38.192 rw=randwrite 00:33:38.192 time_based=1 00:33:38.192 runtime=1 00:33:38.192 ioengine=libaio 00:33:38.192 direct=1 00:33:38.192 bs=4096 00:33:38.192 iodepth=1 00:33:38.192 norandommap=0 00:33:38.192 numjobs=1 00:33:38.192 00:33:38.192 verify_dump=1 00:33:38.192 verify_backlog=512 00:33:38.192 verify_state_save=0 00:33:38.192 do_verify=1 00:33:38.192 verify=crc32c-intel 00:33:38.192 [job0] 00:33:38.192 filename=/dev/nvme0n1 00:33:38.192 [job1] 00:33:38.192 filename=/dev/nvme0n2 00:33:38.192 [job2] 00:33:38.192 filename=/dev/nvme0n3 00:33:38.192 [job3] 00:33:38.192 filename=/dev/nvme0n4 00:33:38.192 Could not set queue depth (nvme0n1) 00:33:38.192 Could not set queue depth (nvme0n2) 00:33:38.192 Could not set queue depth (nvme0n3) 00:33:38.192 Could not set queue depth (nvme0n4) 00:33:38.449 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:38.449 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:38.449 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:38.449 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:38.449 fio-3.35 00:33:38.449 Starting 4 threads 00:33:39.821 00:33:39.821 job0: (groupid=0, jobs=1): err= 0: pid=2251559: Wed Nov 20 06:44:11 2024 00:33:39.821 read: IOPS=569, BW=2279KiB/s (2334kB/s)(2300KiB/1009msec) 00:33:39.821 slat (nsec): min=5868, max=39397, avg=8404.05, stdev=4171.94 00:33:39.821 clat (usec): min=214, max=41893, avg=1331.87, stdev=6495.46 00:33:39.821 lat (usec): min=221, max=41913, avg=1340.28, stdev=6496.30 00:33:39.821 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 225], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 239], 00:33:39.822 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 251], 60.00th=[ 258], 00:33:39.822 | 70.00th=[ 265], 80.00th=[ 293], 90.00th=[ 359], 95.00th=[ 408], 00:33:39.822 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:33:39.822 | 99.99th=[41681] 00:33:39.822 write: IOPS=1014, BW=4059KiB/s (4157kB/s)(4096KiB/1009msec); 0 zone resets 00:33:39.822 slat (nsec): min=5895, max=34762, avg=8691.43, 
stdev=2643.79 00:33:39.822 clat (usec): min=145, max=413, avg=220.09, stdev=50.21 00:33:39.822 lat (usec): min=152, max=422, avg=228.79, stdev=51.12 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 159], 20.00th=[ 163], 00:33:39.822 | 30.00th=[ 172], 40.00th=[ 190], 50.00th=[ 241], 60.00th=[ 251], 00:33:39.822 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 285], 00:33:39.822 | 99.00th=[ 322], 99.50th=[ 338], 99.90th=[ 400], 99.95th=[ 412], 00:33:39.822 | 99.99th=[ 412] 00:33:39.822 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:33:39.822 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:39.822 lat (usec) : 250=55.22%, 500=43.46%, 750=0.31%, 1000=0.06% 00:33:39.822 lat (msec) : 50=0.94% 00:33:39.822 cpu : usr=0.60%, sys=1.98%, ctx=1600, majf=0, minf=1 00:33:39.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 issued rwts: total=575,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:39.822 job1: (groupid=0, jobs=1): err= 0: pid=2251560: Wed Nov 20 06:44:11 2024 00:33:39.822 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:33:39.822 slat (nsec): min=6726, max=34444, avg=16392.45, stdev=7352.74 00:33:39.822 clat (usec): min=40891, max=41106, avg=40976.39, stdev=56.45 00:33:39.822 lat (usec): min=40912, max=41119, avg=40992.79, stdev=56.56 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:33:39.822 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:39.822 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:33:39.822 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:39.822 | 99.99th=[41157] 00:33:39.822 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:33:39.822 slat (nsec): min=5861, max=31601, avg=7458.12, stdev=2776.12 00:33:39.822 clat (usec): min=201, max=458, avg=246.43, stdev=15.56 00:33:39.822 lat (usec): min=217, max=465, avg=253.89, stdev=15.73 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 223], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 243], 00:33:39.822 | 30.00th=[ 245], 40.00th=[ 245], 50.00th=[ 245], 60.00th=[ 247], 00:33:39.822 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255], 00:33:39.822 | 99.00th=[ 269], 99.50th=[ 404], 99.90th=[ 457], 99.95th=[ 457], 00:33:39.822 | 99.99th=[ 457] 00:33:39.822 bw ( KiB/s): min= 4096, max= 4096, per=20.76%, avg=4096.00, stdev= 0.00, samples=1 00:33:39.822 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:33:39.822 lat (usec) : 250=85.39%, 500=10.49% 00:33:39.822 lat (msec) : 50=4.12% 00:33:39.822 cpu : usr=0.10%, sys=0.39%, ctx=535, majf=0, minf=1 00:33:39.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:39.822 job2: (groupid=0, jobs=1): err= 0: pid=2251561: Wed Nov 20 06:44:11 2024 
00:33:39.822 read: IOPS=2028, BW=8116KiB/s (8311kB/s)(8124KiB/1001msec) 00:33:39.822 slat (nsec): min=4682, max=49975, avg=8494.31, stdev=4852.54 00:33:39.822 clat (usec): min=199, max=534, avg=266.52, stdev=44.74 00:33:39.822 lat (usec): min=205, max=548, avg=275.01, stdev=46.52 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 229], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 247], 00:33:39.822 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 260], 00:33:39.822 | 70.00th=[ 265], 80.00th=[ 269], 90.00th=[ 281], 95.00th=[ 351], 00:33:39.822 | 99.00th=[ 498], 99.50th=[ 506], 99.90th=[ 529], 99.95th=[ 529], 00:33:39.822 | 99.99th=[ 537] 00:33:39.822 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:33:39.822 slat (nsec): min=6106, max=34125, avg=7897.75, stdev=2525.66 00:33:39.822 clat (usec): min=163, max=531, avg=202.99, stdev=29.52 00:33:39.822 lat (usec): min=170, max=538, avg=210.89, stdev=29.81 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 169], 5.00th=[ 174], 10.00th=[ 178], 20.00th=[ 182], 00:33:39.822 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 192], 60.00th=[ 198], 00:33:39.822 | 70.00th=[ 206], 80.00th=[ 231], 90.00th=[ 247], 95.00th=[ 255], 00:33:39.822 | 99.00th=[ 289], 99.50th=[ 318], 99.90th=[ 392], 99.95th=[ 396], 00:33:39.822 | 99.99th=[ 529] 00:33:39.822 bw ( KiB/s): min= 8192, max= 8192, per=41.52%, avg=8192.00, stdev= 0.00, samples=1 00:33:39.822 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:33:39.822 lat (usec) : 250=62.52%, 500=37.04%, 750=0.44% 00:33:39.822 cpu : usr=1.40%, sys=3.80%, ctx=4080, majf=0, minf=1 00:33:39.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 issued rwts: total=2031,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:39.822 job3: (groupid=0, jobs=1): err= 0: pid=2251562: Wed Nov 20 06:44:11 2024 00:33:39.822 read: IOPS=1140, BW=4563KiB/s (4672kB/s)(4736KiB/1038msec) 00:33:39.822 slat (nsec): min=4444, max=36543, avg=10606.88, stdev=5721.24 00:33:39.822 clat (usec): min=217, max=41206, avg=575.73, stdev=3336.11 00:33:39.822 lat (usec): min=223, max=41213, avg=586.34, stdev=3336.41 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 237], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 258], 00:33:39.822 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 281], 60.00th=[ 297], 00:33:39.822 | 70.00th=[ 318], 80.00th=[ 363], 90.00th=[ 383], 95.00th=[ 396], 00:33:39.822 | 99.00th=[ 465], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:39.822 | 99.99th=[41157] 00:33:39.822 write: IOPS=1479, BW=5919KiB/s (6061kB/s)(6144KiB/1038msec); 0 zone resets 00:33:39.822 slat (nsec): min=5852, max=55199, avg=9275.32, stdev=5018.48 00:33:39.822 clat (usec): min=152, max=519, avg=209.58, stdev=41.71 00:33:39.822 lat (usec): min=160, max=527, avg=218.85, stdev=42.76 00:33:39.822 clat percentiles (usec): 00:33:39.822 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 186], 00:33:39.822 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 196], 60.00th=[ 202], 00:33:39.822 | 70.00th=[ 208], 80.00th=[ 221], 90.00th=[ 249], 95.00th=[ 273], 00:33:39.822 | 99.00th=[ 383], 99.50th=[ 392], 99.90th=[ 453], 99.95th=[ 519], 00:33:39.822 | 99.99th=[ 519] 00:33:39.822 bw ( KiB/s): min= 4096, max= 8192, per=31.14%, 
avg=6144.00, stdev=2896.31, samples=2 00:33:39.822 iops : min= 1024, max= 2048, avg=1536.00, stdev=724.08, samples=2 00:33:39.822 lat (usec) : 250=55.33%, 500=44.34%, 750=0.04% 00:33:39.822 lat (msec) : 50=0.29% 00:33:39.822 cpu : usr=1.64%, sys=2.70%, ctx=2721, majf=0, minf=1 00:33:39.822 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:39.822 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:39.822 issued rwts: total=1184,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:39.822 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:39.822 00:33:39.822 Run status group 0 (all jobs): 00:33:39.822 READ: bw=14.3MiB/s (15.0MB/s), 85.2KiB/s-8116KiB/s (87.2kB/s-8311kB/s), io=14.9MiB (15.6MB), run=1001-1038msec 00:33:39.822 WRITE: bw=19.3MiB/s (20.2MB/s), 1983KiB/s-8184KiB/s (2030kB/s-8380kB/s), io=20.0MiB (21.0MB), run=1001-1038msec 00:33:39.822 00:33:39.822 Disk stats (read/write): 00:33:39.822 nvme0n1: ios=562/1024, merge=0/0, ticks=889/215, in_queue=1104, util=96.79% 00:33:39.822 nvme0n2: ios=43/512, merge=0/0, ticks=1682/126, in_queue=1808, util=98.07% 00:33:39.822 nvme0n3: ios=1586/2030, merge=0/0, ticks=705/395, in_queue=1100, util=99.58% 00:33:39.822 nvme0n4: ios=1073/1306, merge=0/0, ticks=1611/271, in_queue=1882, util=96.64% 00:33:39.822 06:44:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:33:39.822 [global] 00:33:39.822 thread=1 00:33:39.822 invalidate=1 00:33:39.822 rw=write 00:33:39.822 time_based=1 00:33:39.822 runtime=1 00:33:39.822 ioengine=libaio 00:33:39.822 direct=1 00:33:39.822 bs=4096 00:33:39.822 iodepth=128 00:33:39.822 norandommap=0 00:33:39.822 numjobs=1 00:33:39.822 00:33:39.822 verify_dump=1 00:33:39.822 verify_backlog=512 00:33:39.822 verify_state_save=0 00:33:39.822 do_verify=1 00:33:39.822 verify=crc32c-intel 00:33:39.822 [job0] 00:33:39.822 filename=/dev/nvme0n1 00:33:39.822 [job1] 00:33:39.822 filename=/dev/nvme0n2 00:33:39.822 [job2] 00:33:39.822 filename=/dev/nvme0n3 00:33:39.822 [job3] 00:33:39.822 filename=/dev/nvme0n4 00:33:39.822 Could not set queue depth (nvme0n1) 00:33:39.822 Could not set queue depth (nvme0n2) 00:33:39.822 Could not set queue depth (nvme0n3) 00:33:39.822 Could not set queue depth (nvme0n4) 00:33:39.822 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:39.822 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:39.822 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:39.822 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:39.822 fio-3.35 00:33:39.822 Starting 4 threads 00:33:41.196 00:33:41.196 job0: (groupid=0, jobs=1): err= 0: pid=2251783: Wed Nov 20 06:44:12 2024 00:33:41.196 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 00:33:41.196 slat (usec): min=2, max=11131, avg=89.43, stdev=726.20 00:33:41.196 clat (usec): min=3651, max=21637, avg=11862.14, stdev=2743.33 00:33:41.196 lat (usec): min=7982, max=23210, avg=11951.57, stdev=2808.41 00:33:41.196 clat percentiles (usec): 00:33:41.196 | 1.00th=[ 8029], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9765], 00:33:41.196 | 30.00th=[10159], 
40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:33:41.196 | 70.00th=[12387], 80.00th=[13304], 90.00th=[16188], 95.00th=[17957], 00:33:41.196 | 99.00th=[20055], 99.50th=[20841], 99.90th=[21103], 99.95th=[21627], 00:33:41.196 | 99.99th=[21627] 00:33:41.196 write: IOPS=5490, BW=21.4MiB/s (22.5MB/s)(21.6MiB/1005msec); 0 zone resets 00:33:41.196 slat (usec): min=4, max=30851, avg=91.36, stdev=817.48 00:33:41.196 clat (usec): min=1295, max=23189, avg=11237.75, stdev=2728.57 00:33:41.196 lat (usec): min=1320, max=38314, avg=11329.11, stdev=2795.46 00:33:41.196 clat percentiles (usec): 00:33:41.196 | 1.00th=[ 4621], 5.00th=[ 6718], 10.00th=[ 7701], 20.00th=[ 9110], 00:33:41.196 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11076], 60.00th=[11731], 00:33:41.196 | 70.00th=[12518], 80.00th=[13042], 90.00th=[15139], 95.00th=[16450], 00:33:41.196 | 99.00th=[17695], 99.50th=[18482], 99.90th=[22414], 99.95th=[22414], 00:33:41.196 | 99.99th=[23200] 00:33:41.196 bw ( KiB/s): min=20528, max=22592, per=31.60%, avg=21560.00, stdev=1459.47, samples=2 00:33:41.196 iops : min= 5132, max= 5648, avg=5390.00, stdev=364.87, samples=2 00:33:41.196 lat (msec) : 2=0.04%, 4=0.15%, 10=25.06%, 20=74.09%, 50=0.66% 00:33:41.196 cpu : usr=4.28%, sys=8.86%, ctx=251, majf=0, minf=1 00:33:41.196 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:41.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:41.196 issued rwts: total=5120,5518,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.196 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:41.196 job1: (groupid=0, jobs=1): err= 0: pid=2251789: Wed Nov 20 06:44:12 2024 00:33:41.196 read: IOPS=3576, BW=14.0MiB/s (14.6MB/s)(14.1MiB/1010msec) 00:33:41.196 slat (usec): min=2, max=22509, avg=129.48, stdev=1181.75 00:33:41.196 clat (usec): min=3324, max=55126, avg=17424.11, stdev=7117.60 00:33:41.196 lat (usec): min=5720, max=72437, avg=17553.59, stdev=7245.61 00:33:41.196 clat percentiles (usec): 00:33:41.196 | 1.00th=[ 7111], 5.00th=[ 8029], 10.00th=[ 9372], 20.00th=[10683], 00:33:41.196 | 30.00th=[12518], 40.00th=[13960], 50.00th=[16450], 60.00th=[19268], 00:33:41.196 | 70.00th=[21627], 80.00th=[23987], 90.00th=[25560], 95.00th=[26870], 00:33:41.196 | 99.00th=[37487], 99.50th=[50070], 99.90th=[52691], 99.95th=[53216], 00:33:41.196 | 99.99th=[55313] 00:33:41.196 write: IOPS=4055, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1010msec); 0 zone resets 00:33:41.196 slat (usec): min=3, max=26125, avg=110.97, stdev=1047.44 00:33:41.196 clat (usec): min=240, max=47201, avg=15893.34, stdev=6797.55 00:33:41.196 lat (usec): min=260, max=47221, avg=16004.31, stdev=6896.81 00:33:41.196 clat percentiles (usec): 00:33:41.196 | 1.00th=[ 1532], 5.00th=[ 5800], 10.00th=[ 7963], 20.00th=[11600], 00:33:41.197 | 30.00th=[12780], 40.00th=[13435], 50.00th=[13829], 60.00th=[14484], 00:33:41.197 | 70.00th=[19792], 80.00th=[23462], 90.00th=[24249], 95.00th=[25822], 00:33:41.197 | 99.00th=[34341], 99.50th=[34341], 99.90th=[44303], 99.95th=[44303], 00:33:41.197 | 99.99th=[47449] 00:33:41.197 bw ( KiB/s): min=15024, max=16944, per=23.43%, avg=15984.00, stdev=1357.65, samples=2 00:33:41.197 iops : min= 3756, max= 4236, avg=3996.00, stdev=339.41, samples=2 00:33:41.197 lat (usec) : 250=0.01%, 500=0.01%, 750=0.04%, 1000=0.12% 00:33:41.197 lat (msec) : 2=0.65%, 4=0.62%, 10=14.36%, 20=50.19%, 50=33.81% 00:33:41.197 lat (msec) : 100=0.18% 00:33:41.197 cpu : usr=2.48%, 
sys=7.63%, ctx=289, majf=0, minf=1 00:33:41.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:33:41.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:41.197 issued rwts: total=3612,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:41.197 job2: (groupid=0, jobs=1): err= 0: pid=2251790: Wed Nov 20 06:44:12 2024 00:33:41.197 read: IOPS=3306, BW=12.9MiB/s (13.5MB/s)(13.0MiB/1009msec) 00:33:41.197 slat (usec): min=2, max=27699, avg=133.28, stdev=1357.53 00:33:41.197 clat (usec): min=2173, max=51386, avg=19214.85, stdev=9544.87 00:33:41.197 lat (usec): min=3149, max=51400, avg=19348.13, stdev=9632.95 00:33:41.197 clat percentiles (usec): 00:33:41.197 | 1.00th=[ 3654], 5.00th=[ 4621], 10.00th=[ 9110], 20.00th=[12256], 00:33:41.197 | 30.00th=[13173], 40.00th=[13566], 50.00th=[15270], 60.00th=[20317], 00:33:41.197 | 70.00th=[24249], 80.00th=[30278], 90.00th=[33817], 95.00th=[35390], 00:33:41.197 | 99.00th=[45351], 99.50th=[45351], 99.90th=[46400], 99.95th=[50070], 00:33:41.197 | 99.99th=[51643] 00:33:41.197 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:33:41.197 slat (usec): min=3, max=50679, avg=110.60, stdev=1309.35 00:33:41.197 clat (usec): min=956, max=64574, avg=15115.97, stdev=6896.16 00:33:41.197 lat (usec): min=962, max=88634, avg=15226.57, stdev=7080.95 00:33:41.197 clat percentiles (usec): 00:33:41.197 | 1.00th=[ 1221], 5.00th=[ 5342], 10.00th=[ 7504], 20.00th=[10290], 00:33:41.197 | 30.00th=[11469], 40.00th=[12911], 50.00th=[13829], 60.00th=[14222], 00:33:41.197 | 70.00th=[15533], 80.00th=[21627], 90.00th=[24511], 95.00th=[26346], 00:33:41.197 | 99.00th=[40633], 99.50th=[40633], 99.90th=[40633], 99.95th=[44303], 00:33:41.197 | 99.99th=[64750] 00:33:41.197 bw ( KiB/s): min=12416, max=16256, per=21.01%, avg=14336.00, stdev=2715.29, samples=2 00:33:41.197 iops : min= 3104, max= 4064, avg=3584.00, stdev=678.82, samples=2 00:33:41.197 lat (usec) : 1000=0.10% 00:33:41.197 lat (msec) : 2=1.04%, 4=1.43%, 10=14.42%, 20=50.71%, 50=32.27% 00:33:41.197 lat (msec) : 100=0.03% 00:33:41.197 cpu : usr=2.18%, sys=3.87%, ctx=236, majf=0, minf=1 00:33:41.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:33:41.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:41.197 issued rwts: total=3336,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:41.197 job3: (groupid=0, jobs=1): err= 0: pid=2251791: Wed Nov 20 06:44:12 2024 00:33:41.197 read: IOPS=3978, BW=15.5MiB/s (16.3MB/s)(16.2MiB/1044msec) 00:33:41.197 slat (usec): min=3, max=6547, avg=107.37, stdev=708.80 00:33:41.197 clat (usec): min=9365, max=48018, avg=14930.65, stdev=4527.26 00:33:41.197 lat (usec): min=9371, max=48025, avg=15038.02, stdev=4549.51 00:33:41.197 clat percentiles (usec): 00:33:41.197 | 1.00th=[10290], 5.00th=[11338], 10.00th=[11994], 20.00th=[12780], 00:33:41.197 | 30.00th=[13304], 40.00th=[13698], 50.00th=[13960], 60.00th=[14615], 00:33:41.197 | 70.00th=[15139], 80.00th=[16057], 90.00th=[18744], 95.00th=[19792], 00:33:41.197 | 99.00th=[47449], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:33:41.197 | 99.99th=[47973] 00:33:41.197 write: IOPS=4413, BW=17.2MiB/s 
(18.1MB/s)(18.0MiB/1044msec); 0 zone resets 00:33:41.197 slat (usec): min=4, max=17855, avg=112.91, stdev=729.84 00:33:41.197 clat (usec): min=7030, max=54754, avg=14710.10, stdev=5105.10 00:33:41.197 lat (usec): min=7656, max=60601, avg=14823.01, stdev=5153.32 00:33:41.197 clat percentiles (usec): 00:33:41.197 | 1.00th=[10028], 5.00th=[12518], 10.00th=[12649], 20.00th=[13042], 00:33:41.197 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13698], 60.00th=[14091], 00:33:41.197 | 70.00th=[14615], 80.00th=[15401], 90.00th=[16057], 95.00th=[17171], 00:33:41.197 | 99.00th=[54264], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:33:41.197 | 99.99th=[54789] 00:33:41.197 bw ( KiB/s): min=17264, max=19040, per=26.61%, avg=18152.00, stdev=1255.82, samples=2 00:33:41.197 iops : min= 4316, max= 4760, avg=4538.00, stdev=313.96, samples=2 00:33:41.197 lat (msec) : 10=0.78%, 20=96.14%, 50=2.36%, 100=0.72% 00:33:41.197 cpu : usr=3.64%, sys=8.05%, ctx=309, majf=0, minf=1 00:33:41.197 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:33:41.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:41.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:41.197 issued rwts: total=4154,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:41.197 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:41.197 00:33:41.197 Run status group 0 (all jobs): 00:33:41.197 READ: bw=60.7MiB/s (63.6MB/s), 12.9MiB/s-19.9MiB/s (13.5MB/s-20.9MB/s), io=63.4MiB (66.4MB), run=1005-1044msec 00:33:41.197 WRITE: bw=66.6MiB/s (69.9MB/s), 13.9MiB/s-21.4MiB/s (14.5MB/s-22.5MB/s), io=69.6MiB (72.9MB), run=1005-1044msec 00:33:41.197 00:33:41.197 Disk stats (read/write): 00:33:41.197 nvme0n1: ios=4282/4608, merge=0/0, ticks=49601/49749, in_queue=99350, util=91.68% 00:33:41.197 nvme0n2: ios=3119/3582, merge=0/0, ticks=51609/51881, in_queue=103490, util=94.21% 00:33:41.197 nvme0n3: ios=2713/3072, merge=0/0, ticks=50461/44735, in_queue=95196, util=99.58% 00:33:41.197 nvme0n4: ios=3624/3839, merge=0/0, ticks=26011/25270, in_queue=51281, util=97.80% 00:33:41.197 06:44:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:33:41.197 [global] 00:33:41.197 thread=1 00:33:41.197 invalidate=1 00:33:41.197 rw=randwrite 00:33:41.197 time_based=1 00:33:41.198 runtime=1 00:33:41.198 ioengine=libaio 00:33:41.198 direct=1 00:33:41.198 bs=4096 00:33:41.198 iodepth=128 00:33:41.198 norandommap=0 00:33:41.198 numjobs=1 00:33:41.198 00:33:41.198 verify_dump=1 00:33:41.198 verify_backlog=512 00:33:41.198 verify_state_save=0 00:33:41.198 do_verify=1 00:33:41.198 verify=crc32c-intel 00:33:41.198 [job0] 00:33:41.198 filename=/dev/nvme0n1 00:33:41.198 [job1] 00:33:41.198 filename=/dev/nvme0n2 00:33:41.198 [job2] 00:33:41.198 filename=/dev/nvme0n3 00:33:41.198 [job3] 00:33:41.198 filename=/dev/nvme0n4 00:33:41.198 Could not set queue depth (nvme0n1) 00:33:41.198 Could not set queue depth (nvme0n2) 00:33:41.198 Could not set queue depth (nvme0n3) 00:33:41.198 Could not set queue depth (nvme0n4) 00:33:41.198 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:41.198 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:41.198 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=128 00:33:41.198 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:33:41.198 fio-3.35 00:33:41.198 Starting 4 threads 00:33:42.572 00:33:42.572 job0: (groupid=0, jobs=1): err= 0: pid=2252136: Wed Nov 20 06:44:14 2024 00:33:42.572 read: IOPS=1531, BW=6124KiB/s (6271kB/s)(6204KiB/1013msec) 00:33:42.572 slat (usec): min=3, max=14881, avg=183.78, stdev=1095.93 00:33:42.572 clat (usec): min=9942, max=54823, avg=23486.21, stdev=11032.27 00:33:42.572 lat (usec): min=9962, max=54859, avg=23669.99, stdev=11119.21 00:33:42.572 clat percentiles (usec): 00:33:42.572 | 1.00th=[10945], 5.00th=[11469], 10.00th=[13698], 20.00th=[13829], 00:33:42.572 | 30.00th=[14746], 40.00th=[16712], 50.00th=[21365], 60.00th=[21627], 00:33:42.572 | 70.00th=[27657], 80.00th=[36439], 90.00th=[42206], 95.00th=[44827], 00:33:42.572 | 99.00th=[48497], 99.50th=[48497], 99.90th=[51119], 99.95th=[54789], 00:33:42.572 | 99.99th=[54789] 00:33:42.572 write: IOPS=2021, BW=8087KiB/s (8281kB/s)(8192KiB/1013msec); 0 zone resets 00:33:42.572 slat (usec): min=4, max=18471, avg=337.11, stdev=1618.89 00:33:42.572 clat (msec): min=10, max=139, avg=44.51, stdev=30.88 00:33:42.572 lat (msec): min=10, max=140, avg=44.85, stdev=31.08 00:33:42.572 clat percentiles (msec): 00:33:42.572 | 1.00th=[ 11], 5.00th=[ 13], 10.00th=[ 20], 20.00th=[ 23], 00:33:42.572 | 30.00th=[ 26], 40.00th=[ 28], 50.00th=[ 31], 60.00th=[ 38], 00:33:42.572 | 70.00th=[ 51], 80.00th=[ 68], 90.00th=[ 95], 95.00th=[ 114], 00:33:42.572 | 99.00th=[ 138], 99.50th=[ 140], 99.90th=[ 140], 99.95th=[ 140], 00:33:42.572 | 99.99th=[ 140] 00:33:42.572 bw ( KiB/s): min= 6656, max= 8824, per=14.15%, avg=7740.00, stdev=1533.01, samples=2 00:33:42.572 iops : min= 1664, max= 2206, avg=1935.00, stdev=383.25, samples=2 00:33:42.572 lat (msec) : 10=0.33%, 20=25.51%, 50=56.82%, 100=12.61%, 250=4.72% 00:33:42.572 cpu : usr=3.16%, sys=4.55%, ctx=188, majf=0, minf=1 00:33:42.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.9%, >=64=98.2% 00:33:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:42.572 issued rwts: total=1551,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:42.572 job1: (groupid=0, jobs=1): err= 0: pid=2252137: Wed Nov 20 06:44:14 2024 00:33:42.572 read: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec) 00:33:42.572 slat (usec): min=2, max=14153, avg=124.46, stdev=921.08 00:33:42.572 clat (usec): min=582, max=52529, avg=16480.89, stdev=8505.49 00:33:42.572 lat (usec): min=598, max=52545, avg=16605.35, stdev=8574.17 00:33:42.572 clat percentiles (usec): 00:33:42.572 | 1.00th=[ 2933], 5.00th=[ 6128], 10.00th=[ 8455], 20.00th=[11207], 00:33:42.572 | 30.00th=[11994], 40.00th=[13304], 50.00th=[15401], 60.00th=[15795], 00:33:42.572 | 70.00th=[18220], 80.00th=[19268], 90.00th=[26084], 95.00th=[35390], 00:33:42.572 | 99.00th=[45876], 99.50th=[47449], 99.90th=[52691], 99.95th=[52691], 00:33:42.572 | 99.99th=[52691] 00:33:42.572 write: IOPS=3472, BW=13.6MiB/s (14.2MB/s)(13.7MiB/1009msec); 0 zone resets 00:33:42.572 slat (usec): min=3, max=14346, avg=155.77, stdev=915.58 00:33:42.572 clat (usec): min=1620, max=82928, avg=22049.52, stdev=19557.06 00:33:42.572 lat (usec): min=1626, max=82947, avg=22205.29, stdev=19691.75 00:33:42.572 clat percentiles (usec): 00:33:42.572 | 1.00th=[ 2933], 5.00th=[ 4686], 
10.00th=[ 5145], 20.00th=[ 8291], 00:33:42.572 | 30.00th=[10814], 40.00th=[12125], 50.00th=[13042], 60.00th=[18220], 00:33:42.572 | 70.00th=[20841], 80.00th=[33817], 90.00th=[57934], 95.00th=[67634], 00:33:42.572 | 99.00th=[78119], 99.50th=[81265], 99.90th=[83362], 99.95th=[83362], 00:33:42.572 | 99.99th=[83362] 00:33:42.572 bw ( KiB/s): min=11560, max=15960, per=25.15%, avg=13760.00, stdev=3111.27, samples=2 00:33:42.572 iops : min= 2890, max= 3990, avg=3440.00, stdev=777.82, samples=2 00:33:42.572 lat (usec) : 750=0.24%, 1000=0.06% 00:33:42.572 lat (msec) : 2=0.12%, 4=1.02%, 10=19.92%, 20=52.66%, 50=18.78% 00:33:42.572 lat (msec) : 100=7.19% 00:33:42.572 cpu : usr=3.37%, sys=7.04%, ctx=277, majf=0, minf=1 00:33:42.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:33:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:42.572 issued rwts: total=3072,3504,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:42.572 job2: (groupid=0, jobs=1): err= 0: pid=2252140: Wed Nov 20 06:44:14 2024 00:33:42.572 read: IOPS=5054, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1013msec) 00:33:42.572 slat (usec): min=3, max=16324, avg=103.55, stdev=791.30 00:33:42.572 clat (usec): min=3709, max=66367, avg=13490.80, stdev=6051.81 00:33:42.572 lat (usec): min=3727, max=66373, avg=13594.35, stdev=6106.45 00:33:42.572 clat percentiles (usec): 00:33:42.572 | 1.00th=[ 6259], 5.00th=[ 8291], 10.00th=[ 9241], 20.00th=[ 9896], 00:33:42.572 | 30.00th=[10290], 40.00th=[10814], 50.00th=[11338], 60.00th=[12780], 00:33:42.572 | 70.00th=[14222], 80.00th=[16581], 90.00th=[19530], 95.00th=[22676], 00:33:42.572 | 99.00th=[34341], 99.50th=[43254], 99.90th=[66323], 99.95th=[66323], 00:33:42.572 | 99.99th=[66323] 00:33:42.572 write: IOPS=5163, BW=20.2MiB/s (21.2MB/s)(20.4MiB/1013msec); 0 zone resets 00:33:42.572 slat (usec): min=3, max=11613, avg=78.35, stdev=481.95 00:33:42.572 clat (usec): min=2917, max=54846, avg=11371.91, stdev=5905.90 00:33:42.572 lat (usec): min=2933, max=54856, avg=11450.26, stdev=5939.04 00:33:42.572 clat percentiles (usec): 00:33:42.572 | 1.00th=[ 3458], 5.00th=[ 5473], 10.00th=[ 6521], 20.00th=[ 8717], 00:33:42.572 | 30.00th=[10159], 40.00th=[10814], 50.00th=[11207], 60.00th=[11469], 00:33:42.572 | 70.00th=[11600], 80.00th=[12649], 90.00th=[13829], 95.00th=[15795], 00:33:42.572 | 99.00th=[54264], 99.50th=[54789], 99.90th=[54789], 99.95th=[54789], 00:33:42.572 | 99.99th=[54789] 00:33:42.572 bw ( KiB/s): min=17488, max=23519, per=37.48%, avg=20503.50, stdev=4264.56, samples=2 00:33:42.572 iops : min= 4372, max= 5879, avg=5125.50, stdev=1065.61, samples=2 00:33:42.572 lat (msec) : 4=1.10%, 10=22.98%, 20=69.90%, 50=5.27%, 100=0.75% 00:33:42.572 cpu : usr=7.31%, sys=11.66%, ctx=484, majf=0, minf=2 00:33:42.572 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:33:42.572 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.572 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:42.572 issued rwts: total=5120,5231,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.572 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:42.572 job3: (groupid=0, jobs=1): err= 0: pid=2252141: Wed Nov 20 06:44:14 2024 00:33:42.572 read: IOPS=2603, BW=10.2MiB/s (10.7MB/s)(10.2MiB/1008msec) 00:33:42.572 slat (usec): min=2, max=29062, avg=159.61, 
stdev=1337.66 00:33:42.572 clat (usec): min=3416, max=59949, avg=22074.36, stdev=12358.69 00:33:42.572 lat (usec): min=3425, max=59958, avg=22233.97, stdev=12452.05 00:33:42.572 clat percentiles (usec): 00:33:42.572 | 1.00th=[ 6259], 5.00th=[ 8225], 10.00th=[10028], 20.00th=[11600], 00:33:42.572 | 30.00th=[12125], 40.00th=[16057], 50.00th=[19268], 60.00th=[21890], 00:33:42.572 | 70.00th=[25035], 80.00th=[32637], 90.00th=[42206], 95.00th=[49021], 00:33:42.572 | 99.00th=[60031], 99.50th=[60031], 99.90th=[60031], 99.95th=[60031], 00:33:42.572 | 99.99th=[60031] 00:33:42.572 write: IOPS=3047, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1008msec); 0 zone resets 00:33:42.572 slat (usec): min=3, max=22643, avg=168.08, stdev=1189.28 00:33:42.572 clat (usec): min=1324, max=55020, avg=22479.91, stdev=10368.47 00:33:42.573 lat (usec): min=1336, max=55034, avg=22648.00, stdev=10473.21 00:33:42.573 clat percentiles (usec): 00:33:42.573 | 1.00th=[ 3523], 5.00th=[ 9765], 10.00th=[11207], 20.00th=[14091], 00:33:42.573 | 30.00th=[14746], 40.00th=[17957], 50.00th=[19530], 60.00th=[22152], 00:33:42.573 | 70.00th=[26608], 80.00th=[30802], 90.00th=[38011], 95.00th=[41681], 00:33:42.573 | 99.00th=[51643], 99.50th=[52167], 99.90th=[54789], 99.95th=[54789], 00:33:42.573 | 99.99th=[54789] 00:33:42.573 bw ( KiB/s): min=11784, max=12288, per=22.00%, avg=12036.00, stdev=356.38, samples=2 00:33:42.573 iops : min= 2946, max= 3072, avg=3009.00, stdev=89.10, samples=2 00:33:42.573 lat (msec) : 2=0.18%, 4=0.47%, 10=6.36%, 20=44.54%, 50=45.65% 00:33:42.573 lat (msec) : 100=2.81% 00:33:42.573 cpu : usr=2.09%, sys=4.67%, ctx=206, majf=0, minf=1 00:33:42.573 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:33:42.573 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:42.573 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:33:42.573 issued rwts: total=2624,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:42.573 latency : target=0, window=0, percentile=100.00%, depth=128 00:33:42.573 00:33:42.573 Run status group 0 (all jobs): 00:33:42.573 READ: bw=47.7MiB/s (50.0MB/s), 6124KiB/s-19.7MiB/s (6271kB/s-20.7MB/s), io=48.3MiB (50.7MB), run=1008-1013msec 00:33:42.573 WRITE: bw=53.4MiB/s (56.0MB/s), 8087KiB/s-20.2MiB/s (8281kB/s-21.2MB/s), io=54.1MiB (56.8MB), run=1008-1013msec 00:33:42.573 00:33:42.573 Disk stats (read/write): 00:33:42.573 nvme0n1: ios=1562/1807, merge=0/0, ticks=18382/32195, in_queue=50577, util=93.89% 00:33:42.573 nvme0n2: ios=2101/2560, merge=0/0, ticks=36116/68582, in_queue=104698, util=98.27% 00:33:42.573 nvme0n3: ios=4641/4895, merge=0/0, ticks=51951/49434, in_queue=101385, util=99.58% 00:33:42.573 nvme0n4: ios=2098/2469, merge=0/0, ticks=29369/29962, in_queue=59331, util=99.47% 00:33:42.573 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:33:42.573 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:33:42.573 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2252277 00:33:42.573 06:44:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:33:42.573 [global] 00:33:42.573 thread=1 00:33:42.573 invalidate=1 00:33:42.573 rw=read 00:33:42.573 time_based=1 00:33:42.573 runtime=10 00:33:42.573 ioengine=libaio 00:33:42.573 direct=1 00:33:42.573 bs=4096 00:33:42.573 
iodepth=1 00:33:42.573 norandommap=1 00:33:42.573 numjobs=1 00:33:42.573 00:33:42.573 [job0] 00:33:42.573 filename=/dev/nvme0n1 00:33:42.573 [job1] 00:33:42.573 filename=/dev/nvme0n2 00:33:42.573 [job2] 00:33:42.573 filename=/dev/nvme0n3 00:33:42.573 [job3] 00:33:42.573 filename=/dev/nvme0n4 00:33:42.573 Could not set queue depth (nvme0n1) 00:33:42.573 Could not set queue depth (nvme0n2) 00:33:42.573 Could not set queue depth (nvme0n3) 00:33:42.573 Could not set queue depth (nvme0n4) 00:33:42.830 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:42.830 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:42.830 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:42.830 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:33:42.830 fio-3.35 00:33:42.830 Starting 4 threads 00:33:46.108 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:33:46.108 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:33:46.108 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=13615104, buflen=4096 00:33:46.108 fio: pid=2252369, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:46.108 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:46.108 06:44:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:33:46.108 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=675840, buflen=4096 00:33:46.108 fio: pid=2252368, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:46.367 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=52875264, buflen=4096 00:33:46.367 fio: pid=2252366, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:46.367 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:46.367 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:33:46.626 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:46.626 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:33:46.626 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=48214016, buflen=4096 00:33:46.626 fio: pid=2252367, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:33:46.626 00:33:46.626 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2252366: Wed Nov 20 06:44:18 2024 00:33:46.626 read: 
IOPS=3680, BW=14.4MiB/s (15.1MB/s)(50.4MiB/3508msec) 00:33:46.626 slat (usec): min=3, max=12658, avg=10.95, stdev=186.13 00:33:46.626 clat (usec): min=170, max=40830, avg=256.91, stdev=506.14 00:33:46.626 lat (usec): min=175, max=40838, avg=267.86, stdev=540.09 00:33:46.626 clat percentiles (usec): 00:33:46.626 | 1.00th=[ 190], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 212], 00:33:46.626 | 30.00th=[ 223], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 249], 00:33:46.626 | 70.00th=[ 258], 80.00th=[ 281], 90.00th=[ 306], 95.00th=[ 330], 00:33:46.626 | 99.00th=[ 469], 99.50th=[ 529], 99.90th=[ 586], 99.95th=[ 635], 00:33:46.626 | 99.99th=[40633] 00:33:46.626 bw ( KiB/s): min=11096, max=16776, per=49.32%, avg=14570.67, stdev=1972.98, samples=6 00:33:46.626 iops : min= 2774, max= 4194, avg=3642.67, stdev=493.25, samples=6 00:33:46.626 lat (usec) : 250=61.60%, 500=37.62%, 750=0.73%, 1000=0.01% 00:33:46.626 lat (msec) : 2=0.01%, 4=0.01%, 50=0.02% 00:33:46.626 cpu : usr=1.08%, sys=4.62%, ctx=12914, majf=0, minf=2 00:33:46.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 issued rwts: total=12910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.626 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2252367: Wed Nov 20 06:44:18 2024 00:33:46.626 read: IOPS=3086, BW=12.1MiB/s (12.6MB/s)(46.0MiB/3814msec) 00:33:46.626 slat (usec): min=4, max=25812, avg=16.53, stdev=355.23 00:33:46.626 clat (usec): min=194, max=52705, avg=304.06, stdev=1302.82 00:33:46.626 lat (usec): min=200, max=54967, avg=320.59, stdev=1384.04 00:33:46.626 clat percentiles (usec): 00:33:46.626 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 241], 00:33:46.626 | 30.00th=[ 247], 40.00th=[ 251], 50.00th=[ 255], 60.00th=[ 262], 00:33:46.626 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 318], 00:33:46.626 | 99.00th=[ 420], 99.50th=[ 506], 99.90th=[ 2868], 99.95th=[42206], 00:33:46.626 | 99.99th=[42206] 00:33:46.626 bw ( KiB/s): min= 7993, max=15592, per=44.72%, avg=13212.71, stdev=2703.34, samples=7 00:33:46.626 iops : min= 1998, max= 3898, avg=3303.14, stdev=675.92, samples=7 00:33:46.626 lat (usec) : 250=38.20%, 500=61.27%, 750=0.36%, 1000=0.04% 00:33:46.626 lat (msec) : 2=0.02%, 4=0.01%, 50=0.08%, 100=0.01% 00:33:46.626 cpu : usr=1.70%, sys=4.67%, ctx=11779, majf=0, minf=1 00:33:46.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 issued rwts: total=11772,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.626 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2252368: Wed Nov 20 06:44:18 2024 00:33:46.626 read: IOPS=51, BW=205KiB/s (210kB/s)(660KiB/3217msec) 00:33:46.626 slat (nsec): min=5290, max=33933, avg=14517.76, stdev=6696.86 00:33:46.626 clat (usec): min=226, max=42173, avg=19335.53, stdev=20691.86 00:33:46.626 lat (usec): min=238, max=42197, avg=19350.05, stdev=20695.76 00:33:46.626 clat percentiles (usec): 00:33:46.626 | 1.00th=[ 241], 5.00th=[ 260], 
10.00th=[ 262], 20.00th=[ 269], 00:33:46.626 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[41157], 00:33:46.626 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:46.626 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:33:46.626 | 99.99th=[42206] 00:33:46.626 bw ( KiB/s): min= 96, max= 792, per=0.72%, avg=213.33, stdev=283.51, samples=6 00:33:46.626 iops : min= 24, max= 198, avg=53.33, stdev=70.88, samples=6 00:33:46.626 lat (usec) : 250=3.01%, 500=50.60% 00:33:46.626 lat (msec) : 50=45.78% 00:33:46.626 cpu : usr=0.00%, sys=0.12%, ctx=166, majf=0, minf=2 00:33:46.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 complete : 0=0.6%, 4=99.4%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 issued rwts: total=166,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.626 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2252369: Wed Nov 20 06:44:18 2024 00:33:46.626 read: IOPS=1135, BW=4543KiB/s (4652kB/s)(13.0MiB/2927msec) 00:33:46.626 slat (nsec): min=4223, max=36406, avg=8782.88, stdev=4726.49 00:33:46.626 clat (usec): min=204, max=41192, avg=862.46, stdev=4949.87 00:33:46.626 lat (usec): min=214, max=41207, avg=871.24, stdev=4950.63 00:33:46.626 clat percentiles (usec): 00:33:46.626 | 1.00th=[ 215], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 229], 00:33:46.626 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:33:46.626 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 269], 95.00th=[ 412], 00:33:46.626 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:33:46.626 | 99.99th=[41157] 00:33:46.626 bw ( KiB/s): min= 96, max=13792, per=15.84%, avg=4681.60, stdev=5686.39, samples=5 00:33:46.626 iops : min= 24, max= 3448, avg=1170.40, stdev=1421.60, samples=5 00:33:46.626 lat (usec) : 250=72.09%, 500=24.87%, 750=1.47% 00:33:46.626 lat (msec) : 2=0.03%, 50=1.50% 00:33:46.626 cpu : usr=0.44%, sys=1.47%, ctx=3325, majf=0, minf=2 00:33:46.626 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:46.626 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:46.626 issued rwts: total=3325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:46.626 latency : target=0, window=0, percentile=100.00%, depth=1 00:33:46.626 00:33:46.626 Run status group 0 (all jobs): 00:33:46.626 READ: bw=28.8MiB/s (30.3MB/s), 205KiB/s-14.4MiB/s (210kB/s-15.1MB/s), io=110MiB (115MB), run=2927-3814msec 00:33:46.626 00:33:46.626 Disk stats (read/write): 00:33:46.626 nvme0n1: ios=12297/0, merge=0/0, ticks=3145/0, in_queue=3145, util=94.91% 00:33:46.626 nvme0n2: ios=11804/0, merge=0/0, ticks=4237/0, in_queue=4237, util=97.94% 00:33:46.626 nvme0n3: ios=162/0, merge=0/0, ticks=3067/0, in_queue=3067, util=96.79% 00:33:46.626 nvme0n4: ios=3137/0, merge=0/0, ticks=2806/0, in_queue=2806, util=96.75% 00:33:46.885 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:46.885 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 
00:33:47.451 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:47.451 06:44:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:33:47.451 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:47.451 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:33:47.709 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:33:47.709 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2252277 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:33:48.274 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1221 -- # local i=0 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1233 -- # return 0 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:33:48.274 nvmf hotplug test: fio failed as expected 00:33:48.274 06:44:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:33:48.532 06:44:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:48.532 rmmod nvme_tcp 00:33:48.532 rmmod nvme_fabrics 00:33:48.532 rmmod nvme_keyring 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 2250259 ']' 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 2250259 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@952 -- # '[' -z 2250259 ']' 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@956 -- # kill -0 2250259 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # uname 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:48.532 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2250259 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2250259' 00:33:48.791 killing process with pid 2250259 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@971 -- # kill 2250259 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@976 -- # wait 2250259 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:48.791 06:44:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:51.326 00:33:51.326 real 0m24.141s 00:33:51.326 user 1m7.951s 00:33:51.326 sys 0m10.506s 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:33:51.326 ************************************ 00:33:51.326 END TEST nvmf_fio_target 00:33:51.326 ************************************ 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:51.326 ************************************ 00:33:51.326 START TEST nvmf_bdevio 00:33:51.326 ************************************ 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:33:51.326 * Looking for test storage... 
00:33:51.326 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lcov --version 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:51.326 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:51.326 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.326 --rc genhtml_branch_coverage=1 00:33:51.326 --rc genhtml_function_coverage=1 00:33:51.327 --rc genhtml_legend=1 00:33:51.327 --rc geninfo_all_blocks=1 00:33:51.327 --rc geninfo_unexecuted_blocks=1 00:33:51.327 00:33:51.327 ' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.327 --rc genhtml_branch_coverage=1 00:33:51.327 --rc genhtml_function_coverage=1 00:33:51.327 --rc genhtml_legend=1 00:33:51.327 --rc geninfo_all_blocks=1 00:33:51.327 --rc geninfo_unexecuted_blocks=1 00:33:51.327 00:33:51.327 ' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.327 --rc genhtml_branch_coverage=1 00:33:51.327 --rc genhtml_function_coverage=1 00:33:51.327 --rc genhtml_legend=1 00:33:51.327 --rc geninfo_all_blocks=1 00:33:51.327 --rc geninfo_unexecuted_blocks=1 00:33:51.327 00:33:51.327 ' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:51.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:51.327 --rc genhtml_branch_coverage=1 00:33:51.327 --rc genhtml_function_coverage=1 00:33:51.327 --rc genhtml_legend=1 00:33:51.327 --rc geninfo_all_blocks=1 00:33:51.327 --rc geninfo_unexecuted_blocks=1 00:33:51.327 00:33:51.327 ' 00:33:51.327 06:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:51.327 06:44:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:33:51.327 06:44:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:53.231 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:53.231 06:44:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:53.231 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.231 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:53.232 Found net devices under 0000:09:00.0: cvl_0_0 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:53.232 Found net devices under 0000:09:00.1: cvl_0_1 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:53.232 06:44:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:53.232 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:53.490 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:53.490 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:33:53.490 00:33:53.490 --- 10.0.0.2 ping statistics --- 00:33:53.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.490 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:53.490 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:53.490 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.072 ms 00:33:53.490 00:33:53.490 --- 10.0.0.1 ping statistics --- 00:33:53.490 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:53.490 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:53.490 06:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=2254994 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 2254994 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@833 -- # '[' -z 2254994 ']' 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:53.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:53.490 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:53.491 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.491 [2024-11-20 06:44:25.149333] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:53.491 [2024-11-20 06:44:25.150367] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:33:53.491 [2024-11-20 06:44:25.150429] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:53.491 [2024-11-20 06:44:25.219503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:53.491 [2024-11-20 06:44:25.276486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:53.491 [2024-11-20 06:44:25.276534] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:53.491 [2024-11-20 06:44:25.276563] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:53.491 [2024-11-20 06:44:25.276573] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:53.491 [2024-11-20 06:44:25.276583] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:53.491 [2024-11-20 06:44:25.278092] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:53.491 [2024-11-20 06:44:25.278154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:53.491 [2024-11-20 06:44:25.278221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:53.491 [2024-11-20 06:44:25.278224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:53.749 [2024-11-20 06:44:25.371089] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:53.749 [2024-11-20 06:44:25.371177] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:53.749 [2024-11-20 06:44:25.371419] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:33:53.749 [2024-11-20 06:44:25.372013] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:53.749 [2024-11-20 06:44:25.372244] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:53.749 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@866 -- # return 0 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.750 [2024-11-20 06:44:25.422963] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.750 Malloc0 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.750 06:44:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:53.750 [2024-11-20 06:44:25.487131] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:33:53.750 { 00:33:53.750 "params": { 00:33:53.750 "name": "Nvme$subsystem", 00:33:53.750 "trtype": "$TEST_TRANSPORT", 00:33:53.750 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.750 "adrfam": "ipv4", 00:33:53.750 "trsvcid": "$NVMF_PORT", 00:33:53.750 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.750 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.750 "hdgst": ${hdgst:-false}, 00:33:53.750 "ddgst": ${ddgst:-false} 00:33:53.750 }, 00:33:53.750 "method": "bdev_nvme_attach_controller" 00:33:53.750 } 00:33:53.750 EOF 00:33:53.750 )") 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:33:53.750 06:44:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:33:53.750 "params": { 00:33:53.750 "name": "Nvme1", 00:33:53.750 "trtype": "tcp", 00:33:53.750 "traddr": "10.0.0.2", 00:33:53.750 "adrfam": "ipv4", 00:33:53.750 "trsvcid": "4420", 00:33:53.750 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.750 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.750 "hdgst": false, 00:33:53.750 "ddgst": false 00:33:53.750 }, 00:33:53.750 "method": "bdev_nvme_attach_controller" 00:33:53.750 }' 00:33:53.750 [2024-11-20 06:44:25.534346] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:33:53.750 [2024-11-20 06:44:25.534425] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2255133 ] 00:33:54.008 [2024-11-20 06:44:25.603565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:33:54.008 [2024-11-20 06:44:25.667153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.008 [2024-11-20 06:44:25.667206] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:54.008 [2024-11-20 06:44:25.667210] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.266 I/O targets: 00:33:54.266 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:33:54.266 00:33:54.266 00:33:54.266 CUnit - A unit testing framework for C - Version 2.1-3 00:33:54.266 http://cunit.sourceforge.net/ 00:33:54.266 00:33:54.266 00:33:54.266 Suite: bdevio tests on: Nvme1n1 00:33:54.266 Test: blockdev write read block ...passed 00:33:54.266 Test: blockdev write zeroes read block ...passed 00:33:54.266 Test: blockdev write zeroes read no split ...passed 00:33:54.266 Test: blockdev write zeroes read split ...passed 00:33:54.266 Test: blockdev write zeroes read split partial ...passed 00:33:54.266 Test: blockdev reset ...[2024-11-20 06:44:25.955334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:33:54.266 [2024-11-20 06:44:25.955444] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16c7640 (9): Bad file descriptor 00:33:54.266 [2024-11-20 06:44:26.048300] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 
00:33:54.266 passed 00:33:54.266 Test: blockdev write read 8 blocks ...passed 00:33:54.266 Test: blockdev write read size > 128k ...passed 00:33:54.266 Test: blockdev write read invalid size ...passed 00:33:54.266 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:33:54.266 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:33:54.266 Test: blockdev write read max offset ...passed 00:33:54.525 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:33:54.525 Test: blockdev writev readv 8 blocks ...passed 00:33:54.525 Test: blockdev writev readv 30 x 1block ...passed 00:33:54.525 Test: blockdev writev readv block ...passed 00:33:54.525 Test: blockdev writev readv size > 128k ...passed 00:33:54.525 Test: blockdev writev readv size > 128k in two iovs ...passed 00:33:54.525 Test: blockdev comparev and writev ...[2024-11-20 06:44:26.262828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.262864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.262888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.262906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.263328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.263353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.263375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.263391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.263790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.263814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.263836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.263852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.264247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.264270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.264292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:33:54.525 [2024-11-20 06:44:26.264315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:33:54.525 passed 00:33:54.525 Test: blockdev nvme passthru rw ...passed 00:33:54.525 Test: blockdev nvme passthru vendor specific ...[2024-11-20 06:44:26.346587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:54.525 [2024-11-20 06:44:26.346614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.346777] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:54.525 [2024-11-20 06:44:26.346801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.346954] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:54.525 [2024-11-20 06:44:26.346977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:33:54.525 [2024-11-20 06:44:26.347129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:33:54.525 [2024-11-20 06:44:26.347152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:33:54.525 passed 00:33:54.783 Test: blockdev nvme admin passthru ...passed 00:33:54.783 Test: blockdev copy ...passed 00:33:54.783 00:33:54.783 Run Summary: Type Total Ran Passed Failed Inactive 00:33:54.783 suites 1 1 n/a 0 0 00:33:54.783 tests 23 23 23 0 0 00:33:54.783 asserts 152 152 152 0 n/a 00:33:54.783 00:33:54.783 Elapsed time = 1.116 seconds 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:54.783 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:55.042 rmmod nvme_tcp 00:33:55.042 rmmod nvme_fabrics 00:33:55.042 rmmod nvme_keyring 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 
00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 2254994 ']' 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 2254994 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@952 -- # '[' -z 2254994 ']' 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@956 -- # kill -0 2254994 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # uname 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2254994 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # process_name=reactor_3 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@962 -- # '[' reactor_3 = sudo ']' 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2254994' 00:33:55.042 killing process with pid 2254994 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@971 -- # kill 2254994 00:33:55.042 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@976 -- # wait 2254994 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:55.300 06:44:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.204 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:57.204 00:33:57.204 real 0m6.307s 00:33:57.204 user 
0m7.999s 00:33:57.204 sys 0m2.484s 00:33:57.204 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:57.204 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:33:57.204 ************************************ 00:33:57.204 END TEST nvmf_bdevio 00:33:57.204 ************************************ 00:33:57.204 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:33:57.204 00:33:57.204 real 3m56.081s 00:33:57.204 user 8m56.548s 00:33:57.204 sys 1m24.588s 00:33:57.204 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1128 -- # xtrace_disable 00:33:57.204 06:44:29 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:57.204 ************************************ 00:33:57.204 END TEST nvmf_target_core_interrupt_mode 00:33:57.204 ************************************ 00:33:57.463 06:44:29 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:57.463 06:44:29 nvmf_tcp -- common/autotest_common.sh@1103 -- # '[' 4 -le 1 ']' 00:33:57.463 06:44:29 nvmf_tcp -- common/autotest_common.sh@1109 -- # xtrace_disable 00:33:57.463 06:44:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:57.463 ************************************ 00:33:57.463 START TEST nvmf_interrupt 00:33:57.463 ************************************ 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:33:57.463 * Looking for test storage... 
00:33:57.463 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lcov --version 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:33:57.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.463 --rc genhtml_branch_coverage=1 00:33:57.463 --rc genhtml_function_coverage=1 00:33:57.463 --rc genhtml_legend=1 00:33:57.463 --rc geninfo_all_blocks=1 00:33:57.463 --rc geninfo_unexecuted_blocks=1 00:33:57.463 00:33:57.463 ' 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:33:57.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.463 --rc genhtml_branch_coverage=1 00:33:57.463 --rc genhtml_function_coverage=1 00:33:57.463 --rc genhtml_legend=1 00:33:57.463 --rc geninfo_all_blocks=1 00:33:57.463 --rc geninfo_unexecuted_blocks=1 00:33:57.463 00:33:57.463 ' 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:33:57.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.463 --rc genhtml_branch_coverage=1 00:33:57.463 --rc genhtml_function_coverage=1 00:33:57.463 --rc genhtml_legend=1 00:33:57.463 --rc geninfo_all_blocks=1 00:33:57.463 --rc geninfo_unexecuted_blocks=1 00:33:57.463 00:33:57.463 ' 00:33:57.463 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:33:57.463 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:57.463 --rc genhtml_branch_coverage=1 00:33:57.463 --rc genhtml_function_coverage=1 00:33:57.464 --rc genhtml_legend=1 00:33:57.464 --rc geninfo_all_blocks=1 00:33:57.464 --rc geninfo_unexecuted_blocks=1 00:33:57.464 00:33:57.464 ' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # 
export PATH 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:33:57.464 06:44:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:33:59.994 Found 0000:09:00.0 (0x8086 - 0x159b) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.994 06:44:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:33:59.994 Found 0000:09:00.1 (0x8086 - 0x159b) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:33:59.994 Found net devices under 0000:09:00.0: cvl_0_0 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:33:59.994 Found net devices under 0000:09:00.1: cvl_0_1 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:59.994 06:44:31 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:59.994 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:59.995 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:59.995 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.253 ms 00:33:59.995 00:33:59.995 --- 10.0.0.2 ping statistics --- 00:33:59.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.995 rtt min/avg/max/mdev = 0.253/0.253/0.253/0.000 ms 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:59.995 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:59.995 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:33:59.995 00:33:59.995 --- 10.0.0.1 ping statistics --- 00:33:59.995 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:59.995 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@724 -- # xtrace_disable 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=2257226 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 2257226 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@833 -- # '[' -z 2257226 ']' 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@838 -- # local max_retries=100 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:59.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # xtrace_disable 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:59.995 [2024-11-20 06:44:31.525649] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:59.995 [2024-11-20 06:44:31.526719] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:33:59.995 [2024-11-20 06:44:31.526782] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:59.995 [2024-11-20 06:44:31.596774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:59.995 [2024-11-20 06:44:31.653140] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:33:59.995 [2024-11-20 06:44:31.653192] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:59.995 [2024-11-20 06:44:31.653220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:59.995 [2024-11-20 06:44:31.653231] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:59.995 [2024-11-20 06:44:31.653240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:59.995 [2024-11-20 06:44:31.654576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:59.995 [2024-11-20 06:44:31.654582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:59.995 [2024-11-20 06:44:31.744618] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:59.995 [2024-11-20 06:44:31.744634] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:59.995 [2024-11-20 06:44:31.744874] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@866 -- # return 0 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@730 -- # xtrace_disable 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:33:59.995 5000+0 records in 00:33:59.995 5000+0 records out 00:33:59.995 10240000 bytes (10 MB, 9.8 MiB) copied, 0.014363 s, 713 MB/s 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:33:59.995 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:00.254 AIO0 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:00.254 [2024-11-20 06:44:31.863168] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.254 06:44:31 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:00.254 [2024-11-20 06:44:31.887438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2257226 0 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2257226 0 idle 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:00.254 06:44:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257226 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.26 reactor_0' 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257226 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.26 reactor_0 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=0 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2257226 1 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2257226 1 idle 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:00.254 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:00.255 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:00.255 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:00.255 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257231 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.00 reactor_1' 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257231 root 20 0 128.2g 47232 34560 S 0.0 0.1 0:00.00 reactor_1 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2257280 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 
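The busy/idle checks traced here all take a single snapshot of top for the target's reactor thread and read its %CPU column, as interrupt/common.sh does above. A minimal standalone sketch of that check, assuming a hypothetical helper name check_reactor and the 65%/30% thresholds visible in the trace; everything else is illustrative, not the test script's own code:

#!/usr/bin/env bash
# Sample one reactor thread of a running SPDK target and classify it as busy
# or idle, mirroring the top | grep | awk pipeline seen in the trace above.
check_reactor() {
    local pid=$1 reactor=$2 busy_threshold=${3:-65} idle_threshold=${4:-30}
    local line cpu
    # -bHn 1: batch mode, show threads, one iteration; -w 256 widens the output
    line=$(top -bHn 1 -p "$pid" -w 256 | grep "$reactor")
    cpu=$(echo "$line" | sed -e 's/^\s*//g' | awk '{print $9}')   # %CPU column
    cpu=${cpu%.*}                                                  # 99.9 -> 99, 0.0 -> 0
    if (( ${cpu:-0} >= busy_threshold )); then
        echo "$reactor busy (${cpu:-0}% CPU)"
    elif (( ${cpu:-0} <= idle_threshold )); then
        echo "$reactor idle (${cpu:-0}% CPU)"
    else
        echo "$reactor neither clearly busy nor idle (${cpu:-0}% CPU)"
    fi
}
# Example, using the pid and thread name printed in the trace:
# check_reactor 2257226 reactor_0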
00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2257226 0 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2257226 0 busy 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:00.513 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257226 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.27 reactor_0' 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257226 root 20 0 128.2g 48000 34944 S 0.0 0.1 0:00.27 reactor_0 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:00.771 06:44:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:34:01.704 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:34:01.704 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:01.704 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:01.704 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257226 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.47 reactor_0' 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257226 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:02.47 reactor_0 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( 
cpu_rate < busy_threshold )) 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2257226 1 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2257226 1 busy 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:01.963 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257231 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.26 reactor_1' 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257231 root 20 0 128.2g 48384 34944 R 99.9 0.1 0:01.26 reactor_1 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:01.964 06:44:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2257280 00:34:11.929 Initializing NVMe Controllers 00:34:11.929 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:11.929 Controller IO queue size 256, less than required. 00:34:11.929 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:34:11.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:34:11.929 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:34:11.929 Initialization complete. Launching workers. 
00:34:11.929 ======================================================== 00:34:11.929 Latency(us) 00:34:11.929 Device Information : IOPS MiB/s Average min max 00:34:11.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 13973.30 54.58 18332.51 4720.98 27274.67 00:34:11.929 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 13758.90 53.75 18617.79 4611.19 22777.25 00:34:11.929 ======================================================== 00:34:11.929 Total : 27732.20 108.33 18474.05 4611.19 27274.67 00:34:11.929 00:34:11.929 06:44:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:11.929 06:44:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2257226 0 00:34:11.929 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2257226 0 idle 00:34:11.929 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257226 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0' 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257226 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:20.22 reactor_0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2257226 1 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2257226 1 idle 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257231 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1' 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257231 root 20 0 128.2g 48384 34944 S 0.0 0.1 0:09.98 reactor_1 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:11.930 06:44:42 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:11.930 06:44:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:34:11.930 06:44:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1200 -- # local i=0 00:34:11.930 06:44:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1201 -- # local nvme_device_counter=1 nvme_devices=0 00:34:11.930 06:44:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # [[ -n '' ]] 00:34:11.930 06:44:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1207 -- # sleep 2 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1208 -- # (( i++ <= 15 )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # lsblk -l -o NAME,SERIAL 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # grep -c SPDKISFASTANDAWESOME 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # nvme_devices=1 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( nvme_devices == nvme_device_counter )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # return 0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- 
target/interrupt.sh@52 -- # for i in {0..1} 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2257226 0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2257226 0 idle 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257226 root 20 0 128.2g 60672 34944 R 0.0 0.1 0:20.32 reactor_0' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257226 root 20 0 128.2g 60672 34944 R 0.0 0.1 0:20.32 reactor_0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2257226 1 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2257226 1 idle 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2257226 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 
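Around this point the test attaches the kernel NVMe/TCP initiator to the interrupt-mode target, confirms the namespace shows up, and later detaches it before checking that the reactors report ~0% CPU again. A hedged sketch of that host-side sequence; the addresses, NQNs and serial are the ones printed in the trace, while the bounded retry loop is a simplified stand-in for the waitforserial helper, not its actual code:

# Attach the kernel initiator over TCP, wait for the namespace, then detach.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
    --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a \
    --hostid=29f67375-a902-e411-ace9-001e67bc3c9a
# Poll until a block device with the expected serial appears (simplified).
for _ in $(seq 1 30); do
    lsblk -l -o NAME,SERIAL | grep -qw SPDKISFASTANDAWESOME && break
    sleep 1
done
# ... with no I/O in flight, the interrupt-mode reactors should stay idle ...
nvme disconnect -n nqn.2016-06.io.spdk:cnode1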
00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2257226 -w 256 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2257231 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2257231 root 20 0 128.2g 60672 34944 S 0.0 0.1 0:10.01 reactor_1 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:13.833 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1221 -- # local i=0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # lsblk -o NAME,SERIAL 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1222 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # lsblk -l -o NAME,SERIAL 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1229 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1233 -- # return 0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:13.833 rmmod nvme_tcp 00:34:13.833 rmmod nvme_fabrics 00:34:13.833 rmmod nvme_keyring 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 
2257226 ']' 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 2257226 00:34:13.833 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@952 -- # '[' -z 2257226 ']' 00:34:13.834 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@956 -- # kill -0 2257226 00:34:13.834 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # uname 00:34:13.834 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:13.834 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2257226 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2257226' 00:34:14.092 killing process with pid 2257226 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@971 -- # kill 2257226 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@976 -- # wait 2257226 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:34:14.092 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:14.353 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:14.353 06:44:45 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:14.353 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:14.353 06:44:45 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:16.257 06:44:47 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:16.257 00:34:16.257 real 0m18.896s 00:34:16.257 user 0m37.777s 00:34:16.257 sys 0m6.252s 00:34:16.257 06:44:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:16.257 06:44:47 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:34:16.257 ************************************ 00:34:16.257 END TEST nvmf_interrupt 00:34:16.257 ************************************ 00:34:16.257 00:34:16.257 real 25m2.398s 00:34:16.257 user 58m23.896s 00:34:16.257 sys 6m41.319s 00:34:16.257 06:44:47 nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:16.257 06:44:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.257 ************************************ 00:34:16.257 END TEST nvmf_tcp 00:34:16.257 ************************************ 00:34:16.257 06:44:48 -- spdk/autotest.sh@281 -- # [[ 0 -eq 0 ]] 00:34:16.257 06:44:48 -- spdk/autotest.sh@282 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:16.257 06:44:48 -- 
common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:16.257 06:44:48 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:16.257 06:44:48 -- common/autotest_common.sh@10 -- # set +x 00:34:16.257 ************************************ 00:34:16.257 START TEST spdkcli_nvmf_tcp 00:34:16.257 ************************************ 00:34:16.257 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:16.526 * Looking for test storage... 00:34:16.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lcov --version 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.526 --rc genhtml_branch_coverage=1 00:34:16.526 --rc genhtml_function_coverage=1 00:34:16.526 --rc genhtml_legend=1 00:34:16.526 --rc geninfo_all_blocks=1 00:34:16.526 --rc geninfo_unexecuted_blocks=1 00:34:16.526 00:34:16.526 ' 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.526 --rc genhtml_branch_coverage=1 00:34:16.526 --rc genhtml_function_coverage=1 00:34:16.526 --rc genhtml_legend=1 00:34:16.526 --rc geninfo_all_blocks=1 00:34:16.526 --rc geninfo_unexecuted_blocks=1 00:34:16.526 00:34:16.526 ' 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.526 --rc genhtml_branch_coverage=1 00:34:16.526 --rc genhtml_function_coverage=1 00:34:16.526 --rc genhtml_legend=1 00:34:16.526 --rc geninfo_all_blocks=1 00:34:16.526 --rc geninfo_unexecuted_blocks=1 00:34:16.526 00:34:16.526 ' 00:34:16.526 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:16.526 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.526 --rc genhtml_branch_coverage=1 00:34:16.526 --rc genhtml_function_coverage=1 00:34:16.526 --rc genhtml_legend=1 00:34:16.526 --rc geninfo_all_blocks=1 00:34:16.526 --rc geninfo_unexecuted_blocks=1 00:34:16.526 00:34:16.527 ' 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:16.527 
06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:16.527 06:44:48 
spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:16.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2259297 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2259297 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # '[' -z 2259297 ']' 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:16.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:16.527 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.527 [2024-11-20 06:44:48.247521] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
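For reference, the run_nvmf_tgt / waitforlisten steps traced above boil down to launching the target app and polling its RPC socket until it answers. A minimal hand-run sketch using the same binary, core mask and default socket path seen in this run (the polling loop is illustrative only, not the actual waitforlisten helper):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/bin/nvmf_tgt -m 0x3 -p 0 &
tgt_pid=$!
# Block until the app answers on its default RPC socket; bail out if it died early.
until $SPDK/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    kill -0 "$tgt_pid" || { echo "nvmf_tgt exited before listening" >&2; exit 1; }
    sleep 0.5
done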
00:34:16.527 [2024-11-20 06:44:48.247611] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2259297 ] 00:34:16.527 [2024-11-20 06:44:48.316362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:16.855 [2024-11-20 06:44:48.382521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:16.855 [2024-11-20 06:44:48.382527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@866 -- # return 0 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:16.855 06:44:48 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:16.855 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:16.855 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:16.855 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:16.855 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:16.855 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:16.855 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:16.855 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:16.855 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:16.855 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' 
'\''127.0.0.1:4260'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:16.855 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:16.855 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:16.855 ' 00:34:19.383 [2024-11-20 06:44:51.157585] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:20.757 [2024-11-20 06:44:52.430034] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:23.283 [2024-11-20 06:44:54.773145] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:25.181 [2024-11-20 06:44:56.791317] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:26.553 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:26.553 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:26.553 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:26.553 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:26.553 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:26.553 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:26.553 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:26.553 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:26.553 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:26.553 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:26.553 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:26.553 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:26.553 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:26.811 06:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:27.069 06:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:27.327 06:44:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:27.327 06:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:27.327 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:27.327 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.328 
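The spdkcli_job.py batch above can also be replayed one command at a time with scripts/spdkcli.py, the same tool the check_match step uses for ll /nvmf. An illustrative recap built only from commands already executed in this trace (the RPC socket defaults to /var/tmp/spdk.sock):

SPDKCLI=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py
$SPDKCLI /bdevs/malloc create 32 512 Malloc1
$SPDKCLI nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
$SPDKCLI /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
$SPDKCLI /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
$SPDKCLI ll /nvmf    # dump the resulting tree, as check_match does before diffing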
06:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:27.328 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:27.328 06:44:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:27.328 06:44:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:27.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:27.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:27.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:27.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:27.328 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:27.328 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:27.328 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:27.328 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:27.328 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:27.328 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:27.328 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:27.328 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:27.328 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:27.328 ' 00:34:32.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:32.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:32.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:32.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:32.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:32.590 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:32.590 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:32.590 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:32.590 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:32.590 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:32.590 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:34:32.590 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:32.590 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:32.590 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.590 
06:45:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2259297 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2259297 ']' 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2259297 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # uname 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2259297 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2259297' 00:34:32.590 killing process with pid 2259297 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@971 -- # kill 2259297 00:34:32.590 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@976 -- # wait 2259297 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2259297 ']' 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2259297 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # '[' -z 2259297 ']' 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # kill -0 2259297 00:34:32.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2259297) - No such process 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@979 -- # echo 'Process with pid 2259297 is not found' 00:34:32.848 Process with pid 2259297 is not found 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:32.848 00:34:32.848 real 0m16.609s 00:34:32.848 user 0m35.448s 00:34:32.848 sys 0m0.700s 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:32.848 06:45:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:32.848 ************************************ 00:34:32.848 END TEST spdkcli_nvmf_tcp 00:34:32.848 ************************************ 00:34:32.848 06:45:04 -- spdk/autotest.sh@283 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:32.848 06:45:04 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:34:32.848 06:45:04 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:32.848 06:45:04 -- common/autotest_common.sh@10 -- # set +x 00:34:33.107 ************************************ 00:34:33.107 START TEST nvmf_identify_passthru 00:34:33.107 ************************************ 00:34:33.107 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:33.107 * Looking for test 
storage... 00:34:33.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:33.107 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:33.107 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lcov --version 00:34:33.107 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:33.107 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:33.107 06:45:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:33.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.108 --rc genhtml_branch_coverage=1 00:34:33.108 --rc genhtml_function_coverage=1 00:34:33.108 --rc genhtml_legend=1 00:34:33.108 --rc geninfo_all_blocks=1 00:34:33.108 --rc geninfo_unexecuted_blocks=1 00:34:33.108 00:34:33.108 ' 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:33.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.108 --rc genhtml_branch_coverage=1 00:34:33.108 --rc genhtml_function_coverage=1 00:34:33.108 --rc genhtml_legend=1 00:34:33.108 --rc geninfo_all_blocks=1 00:34:33.108 --rc geninfo_unexecuted_blocks=1 00:34:33.108 00:34:33.108 ' 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:33.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.108 --rc genhtml_branch_coverage=1 00:34:33.108 --rc genhtml_function_coverage=1 00:34:33.108 --rc genhtml_legend=1 00:34:33.108 --rc geninfo_all_blocks=1 00:34:33.108 --rc geninfo_unexecuted_blocks=1 00:34:33.108 00:34:33.108 ' 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:33.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.108 --rc genhtml_branch_coverage=1 00:34:33.108 --rc genhtml_function_coverage=1 00:34:33.108 --rc genhtml_legend=1 00:34:33.108 --rc geninfo_all_blocks=1 00:34:33.108 --rc geninfo_unexecuted_blocks=1 00:34:33.108 00:34:33.108 ' 00:34:33.108 06:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:33.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:33.108 06:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.108 06:45:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:33.108 06:45:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.108 06:45:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:33.108 06:45:04 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:34:33.108 06:45:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:34:35.641 06:45:07 
nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:35.641 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:35.641 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ 
tcp == rdma ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:35.641 Found net devices under 0000:09:00.0: cvl_0_0 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:35.641 Found net devices under 0000:09:00.1: cvl_0_1 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:35.641 06:45:07 nvmf_identify_passthru -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:35.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:35.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:34:35.641 00:34:35.641 --- 10.0.0.2 ping statistics --- 00:34:35.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.641 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:35.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
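Collected in one place, the nvmf_tcp_init sequence traced above reduces to the commands below (cvl_0_0 / cvl_0_1 are the E810 ports detected earlier in this run, so substitute your own interface names if reproducing; an illustrative recap, not a standalone script):

ip netns add cvl_0_0_ns_spdk                        # target-side namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, default netns
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                  # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # target -> initiator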
00:34:35.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.098 ms 00:34:35.641 00:34:35.641 --- 10.0.0.1 ping statistics --- 00:34:35.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:35.641 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:35.641 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:35.642 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:35.642 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:35.642 06:45:07 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # bdfs=() 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1507 -- # local bdfs 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # bdfs=($(get_nvme_bdfs)) 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1508 -- # get_nvme_bdfs 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # bdfs=() 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1496 -- # local bdfs 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1497 -- # jq -r '.config[].params.traddr' 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # (( 1 == 0 )) 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1502 -- # printf '%s\n' 0000:0b:00.0 00:34:35.642 06:45:07 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # echo 0000:0b:00.0 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:0b:00.0 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:0b:00.0 ']' 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:35.642 06:45:07 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:39.836 06:45:11 nvmf_identify_passthru -- target/identify_passthru.sh@23 
-- # nvme_serial_number=BTLJ72430F4Q1P0FGN 00:34:39.836 06:45:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:0b:00.0' -i 0 00:34:39.836 06:45:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:34:39.836 06:45:11 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2264553 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:44.023 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2264553 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # '[' -z 2264553 ']' 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:44.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:44.023 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.023 [2024-11-20 06:45:15.642066] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:34:44.023 [2024-11-20 06:45:15.642153] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:44.023 [2024-11-20 06:45:15.713275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:44.023 [2024-11-20 06:45:15.770455] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:44.023 [2024-11-20 06:45:15.770511] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:44.023 [2024-11-20 06:45:15.770524] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:44.023 [2024-11-20 06:45:15.770534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:44.023 [2024-11-20 06:45:15.770543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:44.023 [2024-11-20 06:45:15.772091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:44.023 [2024-11-20 06:45:15.772200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:44.023 [2024-11-20 06:45:15.772316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:44.023 [2024-11-20 06:45:15.772319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@866 -- # return 0 00:34:44.282 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.282 INFO: Log level set to 20 00:34:44.282 INFO: Requests: 00:34:44.282 { 00:34:44.282 "jsonrpc": "2.0", 00:34:44.282 "method": "nvmf_set_config", 00:34:44.282 "id": 1, 00:34:44.282 "params": { 00:34:44.282 "admin_cmd_passthru": { 00:34:44.282 "identify_ctrlr": true 00:34:44.282 } 00:34:44.282 } 00:34:44.282 } 00:34:44.282 00:34:44.282 INFO: response: 00:34:44.282 { 00:34:44.282 "jsonrpc": "2.0", 00:34:44.282 "id": 1, 00:34:44.282 "result": true 00:34:44.282 } 00:34:44.282 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.282 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.282 INFO: Setting log level to 20 00:34:44.282 INFO: Setting log level to 20 00:34:44.282 INFO: Log level set to 20 00:34:44.282 INFO: Log level set to 20 00:34:44.282 INFO: Requests: 00:34:44.282 { 00:34:44.282 "jsonrpc": "2.0", 00:34:44.282 "method": "framework_start_init", 00:34:44.282 "id": 1 00:34:44.282 } 00:34:44.282 00:34:44.282 INFO: Requests: 00:34:44.282 { 00:34:44.282 "jsonrpc": "2.0", 00:34:44.282 "method": "framework_start_init", 00:34:44.282 "id": 1 00:34:44.282 } 00:34:44.282 00:34:44.282 [2024-11-20 06:45:15.988273] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:34:44.282 INFO: response: 00:34:44.282 { 00:34:44.282 "jsonrpc": "2.0", 00:34:44.282 "id": 1, 00:34:44.282 "result": true 00:34:44.282 } 00:34:44.282 00:34:44.282 INFO: response: 00:34:44.282 { 00:34:44.282 "jsonrpc": "2.0", 00:34:44.282 "id": 1, 00:34:44.282 "result": true 00:34:44.282 } 00:34:44.282 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.282 06:45:15 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:44.282 06:45:15 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.282 06:45:15 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:34:44.282 INFO: Setting log level to 40 00:34:44.282 INFO: Setting log level to 40 00:34:44.282 INFO: Setting log level to 40 00:34:44.282 [2024-11-20 06:45:15.998610] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:44.282 06:45:16 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:44.282 06:45:16 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:34:44.282 06:45:16 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:44.282 06:45:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:44.282 06:45:16 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:0b:00.0 00:34:44.282 06:45:16 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:44.282 06:45:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 Nvme0n1 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 [2024-11-20 06:45:18.905721] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 [ 00:34:47.561 { 00:34:47.561 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:34:47.561 "subtype": "Discovery", 00:34:47.561 "listen_addresses": [], 00:34:47.561 "allow_any_host": true, 00:34:47.561 "hosts": [] 00:34:47.561 }, 00:34:47.561 { 00:34:47.561 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:34:47.561 "subtype": "NVMe", 00:34:47.561 "listen_addresses": [ 00:34:47.561 { 00:34:47.561 "trtype": "TCP", 00:34:47.561 "adrfam": "IPv4", 00:34:47.561 "traddr": "10.0.0.2", 00:34:47.561 "trsvcid": "4420" 00:34:47.561 } 00:34:47.561 ], 00:34:47.561 "allow_any_host": true, 00:34:47.561 "hosts": [], 00:34:47.561 "serial_number": 
"SPDK00000000000001", 00:34:47.561 "model_number": "SPDK bdev Controller", 00:34:47.561 "max_namespaces": 1, 00:34:47.561 "min_cntlid": 1, 00:34:47.561 "max_cntlid": 65519, 00:34:47.561 "namespaces": [ 00:34:47.561 { 00:34:47.561 "nsid": 1, 00:34:47.561 "bdev_name": "Nvme0n1", 00:34:47.561 "name": "Nvme0n1", 00:34:47.561 "nguid": "BF847A4C6108496E85DE4AE604A99228", 00:34:47.561 "uuid": "bf847a4c-6108-496e-85de-4ae604a99228" 00:34:47.561 } 00:34:47.561 ] 00:34:47.561 } 00:34:47.561 ] 00:34:47.561 06:45:18 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:34:47.561 06:45:18 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F4Q1P0FGN 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F4Q1P0FGN '!=' BTLJ72430F4Q1P0FGN ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:34:47.561 06:45:19 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:47.561 rmmod nvme_tcp 00:34:47.561 rmmod nvme_fabrics 00:34:47.561 rmmod nvme_keyring 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@517 -- # 
'[' -n 2264553 ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 2264553 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # '[' -z 2264553 ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # kill -0 2264553 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # uname 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2264553 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2264553' 00:34:47.561 killing process with pid 2264553 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@971 -- # kill 2264553 00:34:47.561 06:45:19 nvmf_identify_passthru -- common/autotest_common.sh@976 -- # wait 2264553 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:49.459 06:45:20 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:49.459 06:45:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:49.459 06:45:20 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.376 06:45:22 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:51.376 00:34:51.376 real 0m18.224s 00:34:51.376 user 0m26.140s 00:34:51.376 sys 0m3.302s 00:34:51.376 06:45:22 nvmf_identify_passthru -- common/autotest_common.sh@1128 -- # xtrace_disable 00:34:51.376 06:45:22 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:51.376 ************************************ 00:34:51.376 END TEST nvmf_identify_passthru 00:34:51.376 ************************************ 00:34:51.376 06:45:22 -- spdk/autotest.sh@285 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:51.376 06:45:22 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:51.376 06:45:22 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:51.376 06:45:22 -- common/autotest_common.sh@10 -- # set +x 00:34:51.376 ************************************ 00:34:51.376 START TEST nvmf_dif 00:34:51.376 ************************************ 00:34:51.376 06:45:22 nvmf_dif -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:34:51.376 * Looking for test 
storage... 00:34:51.376 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1691 -- # lcov --version 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:34:51.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.376 --rc genhtml_branch_coverage=1 00:34:51.376 --rc genhtml_function_coverage=1 00:34:51.376 --rc genhtml_legend=1 00:34:51.376 --rc geninfo_all_blocks=1 00:34:51.376 --rc geninfo_unexecuted_blocks=1 00:34:51.376 00:34:51.376 ' 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:34:51.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.376 --rc genhtml_branch_coverage=1 00:34:51.376 --rc genhtml_function_coverage=1 00:34:51.376 --rc genhtml_legend=1 00:34:51.376 --rc geninfo_all_blocks=1 00:34:51.376 --rc geninfo_unexecuted_blocks=1 00:34:51.376 00:34:51.376 ' 00:34:51.376 06:45:23 nvmf_dif -- 
common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:34:51.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.376 --rc genhtml_branch_coverage=1 00:34:51.376 --rc genhtml_function_coverage=1 00:34:51.376 --rc genhtml_legend=1 00:34:51.376 --rc geninfo_all_blocks=1 00:34:51.376 --rc geninfo_unexecuted_blocks=1 00:34:51.376 00:34:51.376 ' 00:34:51.376 06:45:23 nvmf_dif -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:34:51.376 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:51.376 --rc genhtml_branch_coverage=1 00:34:51.376 --rc genhtml_function_coverage=1 00:34:51.376 --rc genhtml_legend=1 00:34:51.376 --rc geninfo_all_blocks=1 00:34:51.376 --rc geninfo_unexecuted_blocks=1 00:34:51.376 00:34:51.376 ' 00:34:51.376 06:45:23 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:51.376 06:45:23 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:51.376 06:45:23 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:51.376 06:45:23 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.376 06:45:23 nvmf_dif -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.376 06:45:23 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.377 06:45:23 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:34:51.377 06:45:23 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:51.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:51.377 06:45:23 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:34:51.377 06:45:23 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:34:51.377 06:45:23 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:34:51.377 06:45:23 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:34:51.377 06:45:23 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:51.377 06:45:23 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:51.377 06:45:23 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:51.377 06:45:23 nvmf_dif -- nvmf/common.sh@309 -- # 
xtrace_disable 00:34:51.377 06:45:23 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@320 -- # local -ga e810 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:34:53.908 Found 0000:09:00.0 (0x8086 - 0x159b) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.908 
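
Up to this point nvmf/common.sh has been selecting usable NICs purely by PCI vendor/device ID (0x8086:0x159b is the Intel E810 "ice" part) before pairing each PCI function with its kernel net device. A minimal stand-alone sketch of the same discovery, assuming lspci and sysfs are available (an illustration, not the helper the script itself uses):

# enumerate E810 (8086:159b) PCI functions and the net devices bound to them
for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for netdev in /sys/bus/pci/devices/"$pci"/net/*; do
        [ -e "$netdev" ] && echo "Found net devices under $pci: $(basename "$netdev")"
    done
done

On this host that loop should print the same cvl_0_0/cvl_0_1 pairing reported a few records further down in the log.
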
06:45:25 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:34:53.908 Found 0000:09:00.1 (0x8086 - 0x159b) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:34:53.908 Found net devices under 0000:09:00.0: cvl_0_0 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:34:53.908 Found net devices under 0000:09:00.1: cvl_0_1 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:53.908 06:45:25 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:53.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:53.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.304 ms 00:34:53.909 00:34:53.909 --- 10.0.0.2 ping statistics --- 00:34:53.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.909 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:53.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:53.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:34:53.909 00:34:53.909 --- 10.0.0.1 ping statistics --- 00:34:53.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:53.909 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:34:53.909 06:45:25 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:54.842 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:54.842 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:54.842 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:54.842 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:54.842 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:54.842 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:54.842 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:54.843 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:54.843 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:34:54.843 0000:0b:00.0 (8086 0a54): Already using the vfio-pci driver 00:34:54.843 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:34:54.843 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:34:54.843 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:34:54.843 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:34:54.843 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:34:54.843 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:34:54.843 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:55.101 06:45:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:34:55.101 06:45:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=2267821 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:34:55.101 06:45:26 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 2267821 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@833 -- # '[' -z 2267821 ']' 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@838 -- # local max_retries=100 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:34:55.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@842 -- # xtrace_disable 00:34:55.101 06:45:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.101 [2024-11-20 06:45:26.820046] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:34:55.101 [2024-11-20 06:45:26.820135] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:55.101 [2024-11-20 06:45:26.893260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:55.359 [2024-11-20 06:45:26.955793] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:55.359 [2024-11-20 06:45:26.955854] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:55.359 [2024-11-20 06:45:26.955866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:55.359 [2024-11-20 06:45:26.955878] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:55.359 [2024-11-20 06:45:26.955903] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:55.359 [2024-11-20 06:45:26.956515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@866 -- # return 0 00:34:55.359 06:45:27 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.359 06:45:27 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:55.359 06:45:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:34:55.359 06:45:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.359 [2024-11-20 06:45:27.111731] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.359 06:45:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:34:55.359 06:45:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:55.359 ************************************ 00:34:55.359 START TEST fio_dif_1_default 00:34:55.359 ************************************ 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1127 -- # fio_dif_1 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@31 -- # create_subsystem 0 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:34:55.359 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:55.360 bdev_null0 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:55.360 [2024-11-20 06:45:27.171972] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:34:55.360 { 00:34:55.360 "params": { 00:34:55.360 "name": "Nvme$subsystem", 00:34:55.360 "trtype": "$TEST_TRANSPORT", 00:34:55.360 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:55.360 "adrfam": "ipv4", 00:34:55.360 "trsvcid": "$NVMF_PORT", 00:34:55.360 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:55.360 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:55.360 "hdgst": ${hdgst:-false}, 00:34:55.360 "ddgst": ${ddgst:-false} 00:34:55.360 }, 00:34:55.360 "method": "bdev_nvme_attach_controller" 00:34:55.360 } 00:34:55.360 EOF 00:34:55.360 )") 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1358 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local sanitizers 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # shift 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # local asan_lib= 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libasan 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
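
At this point the fio_dif_1_default job is being wired together: gen_nvmf_target_json prints a bdev_nvme_attach_controller stanza for Nvme0 that fio receives as --spdk_json_conf on /dev/fd/62, while fio_bdev LD_PRELOADs SPDK's fio bdev plugin and feeds the job file itself over /dev/fd/61. A rough stand-alone equivalent with the JSON written to a regular file instead of a pipe (the "subsystems" wrapper, the job options, and the thread=1 setting are assumptions about how the SPDK fio bdev plugin is usually driven; the paths, connection params, and rw/bs/iodepth values are taken from the surrounding log):

cat > /tmp/nvme0.json <<'EOF'
{ "subsystems": [ { "subsystem": "bdev", "config": [ {
    "method": "bdev_nvme_attach_controller",
    "params": { "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
                "adrfam": "ipv4", "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false, "ddgst": false } } ] } ] }
EOF
# run fio against the Nvme0n1 bdev exposed by that config
LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --name=filename0 --ioengine=spdk_bdev --thread=1 \
    --spdk_json_conf=/tmp/nvme0.json --filename=Nvme0n1 \
    --rw=randread --bs=4k --iodepth=4 --time_based=1 --runtime=10

The randread/4k/iodepth=4 settings mirror the "rw=randread, bs=(R) 4096B-4096B, ..., iodepth=4" header that fio prints for filename0 just below.
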
00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:34:55.360 06:45:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:34:55.360 "params": { 00:34:55.360 "name": "Nvme0", 00:34:55.360 "trtype": "tcp", 00:34:55.360 "traddr": "10.0.0.2", 00:34:55.360 "adrfam": "ipv4", 00:34:55.360 "trsvcid": "4420", 00:34:55.360 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:55.360 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:55.360 "hdgst": false, 00:34:55.360 "ddgst": false 00:34:55.360 }, 00:34:55.360 "method": "bdev_nvme_attach_controller" 00:34:55.360 }' 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # asan_lib= 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:55.618 06:45:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:55.877 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:55.877 fio-3.35 00:34:55.877 Starting 1 thread 00:35:08.070 00:35:08.070 filename0: (groupid=0, jobs=1): err= 0: pid=2268054: Wed Nov 20 06:45:38 2024 00:35:08.070 read: IOPS=238, BW=952KiB/s (975kB/s)(9552KiB/10033msec) 00:35:08.070 slat (nsec): min=4322, max=70420, avg=8768.48, stdev=3645.47 00:35:08.070 clat (usec): min=497, max=42745, avg=16777.89, stdev=19855.37 00:35:08.070 lat (usec): min=504, max=42756, avg=16786.65, stdev=19855.42 00:35:08.070 clat percentiles (usec): 00:35:08.070 | 1.00th=[ 594], 5.00th=[ 635], 10.00th=[ 652], 20.00th=[ 685], 00:35:08.070 | 30.00th=[ 717], 40.00th=[ 742], 50.00th=[ 775], 60.00th=[ 840], 00:35:08.070 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:35:08.070 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:35:08.070 | 99.99th=[42730] 00:35:08.070 bw ( KiB/s): min= 768, max= 1728, per=100.00%, avg=953.60, stdev=213.68, samples=20 00:35:08.070 iops : min= 192, max= 432, avg=238.40, stdev=53.42, samples=20 00:35:08.070 lat (usec) : 500=0.04%, 750=43.22%, 1000=17.04% 00:35:08.070 lat (msec) : 10=0.17%, 50=39.53% 00:35:08.070 cpu : usr=91.04%, sys=8.68%, ctx=16, majf=0, minf=230 00:35:08.070 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:08.070 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.070 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:08.070 issued rwts: total=2388,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:08.070 latency : target=0, window=0, percentile=100.00%, 
depth=4 00:35:08.070 00:35:08.070 Run status group 0 (all jobs): 00:35:08.070 READ: bw=952KiB/s (975kB/s), 952KiB/s-952KiB/s (975kB/s-975kB/s), io=9552KiB (9781kB), run=10033-10033msec 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 00:35:08.070 real 0m11.372s 00:35:08.070 user 0m10.375s 00:35:08.070 sys 0m1.196s 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 ************************************ 00:35:08.070 END TEST fio_dif_1_default 00:35:08.070 ************************************ 00:35:08.070 06:45:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:08.070 06:45:38 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:08.070 06:45:38 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 ************************************ 00:35:08.070 START TEST fio_dif_1_multi_subsystems 00:35:08.070 ************************************ 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1127 -- # fio_dif_1_multi_subsystems 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 bdev_null0 00:35:08.070 06:45:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 [2024-11-20 06:45:38.598268] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 bdev_null1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- 
# rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.070 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:08.070 { 00:35:08.070 "params": { 00:35:08.070 "name": "Nvme$subsystem", 00:35:08.070 "trtype": "$TEST_TRANSPORT", 00:35:08.070 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.071 "adrfam": "ipv4", 00:35:08.071 "trsvcid": "$NVMF_PORT", 00:35:08.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.071 "hdgst": ${hdgst:-false}, 00:35:08.071 "ddgst": ${ddgst:-false} 00:35:08.071 }, 00:35:08.071 "method": "bdev_nvme_attach_controller" 00:35:08.071 } 00:35:08.071 EOF 00:35:08.071 )") 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # shift 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.071 
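
For the two-subsystem case the wiring is the same, just doubled: gen_nvmf_target_json emits one bdev_nvme_attach_controller stanza per subsystem (Nvme0 -> nqn.2016-06.io.spdk:cnode0, Nvme1 -> nqn.2016-06.io.spdk:cnode1), against a target that the rpc_cmd lines above configured. Since rpc_cmd is the test suite's thin wrapper around scripts/rpc.py, the target-side sequence visible in the log condenses to roughly the following (RPC socket assumed to be the default /var/tmp/spdk.sock):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# TCP transport with DIF insert/strip, as in target/dif.sh@50 above
$rpc nvmf_create_transport -t tcp -o --dif-insert-or-strip
for i in 0 1; do
    # 64 MB null bdev, 512-byte blocks plus 16-byte metadata, DIF type 1
    $rpc bdev_null_create bdev_null$i 64 512 --md-size 16 --dif-type 1
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
        --serial-number 53313233-$i --allow-any-host
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i bdev_null$i
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
        -t tcp -a 10.0.0.2 -s 4420
done
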
06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libasan 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:08.071 { 00:35:08.071 "params": { 00:35:08.071 "name": "Nvme$subsystem", 00:35:08.071 "trtype": "$TEST_TRANSPORT", 00:35:08.071 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:08.071 "adrfam": "ipv4", 00:35:08.071 "trsvcid": "$NVMF_PORT", 00:35:08.071 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:08.071 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:08.071 "hdgst": ${hdgst:-false}, 00:35:08.071 "ddgst": ${ddgst:-false} 00:35:08.071 }, 00:35:08.071 "method": "bdev_nvme_attach_controller" 00:35:08.071 } 00:35:08.071 EOF 00:35:08.071 )") 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:08.071 "params": { 00:35:08.071 "name": "Nvme0", 00:35:08.071 "trtype": "tcp", 00:35:08.071 "traddr": "10.0.0.2", 00:35:08.071 "adrfam": "ipv4", 00:35:08.071 "trsvcid": "4420", 00:35:08.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:08.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:08.071 "hdgst": false, 00:35:08.071 "ddgst": false 00:35:08.071 }, 00:35:08.071 "method": "bdev_nvme_attach_controller" 00:35:08.071 },{ 00:35:08.071 "params": { 00:35:08.071 "name": "Nvme1", 00:35:08.071 "trtype": "tcp", 00:35:08.071 "traddr": "10.0.0.2", 00:35:08.071 "adrfam": "ipv4", 00:35:08.071 "trsvcid": "4420", 00:35:08.071 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:08.071 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:08.071 "hdgst": false, 00:35:08.071 "ddgst": false 00:35:08.071 }, 00:35:08.071 "method": "bdev_nvme_attach_controller" 00:35:08.071 }' 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 
-- # asan_lib= 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:08.071 06:45:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:08.071 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:08.071 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:08.071 fio-3.35 00:35:08.071 Starting 2 threads 00:35:18.125 00:35:18.125 filename0: (groupid=0, jobs=1): err= 0: pid=2269464: Wed Nov 20 06:45:49 2024 00:35:18.125 read: IOPS=146, BW=585KiB/s (599kB/s)(5856KiB/10012msec) 00:35:18.125 slat (nsec): min=7777, max=28436, avg=9716.13, stdev=2579.21 00:35:18.125 clat (usec): min=557, max=43496, avg=27323.78, stdev=19238.74 00:35:18.125 lat (usec): min=565, max=43519, avg=27333.49, stdev=19238.73 00:35:18.125 clat percentiles (usec): 00:35:18.125 | 1.00th=[ 578], 5.00th=[ 586], 10.00th=[ 603], 20.00th=[ 619], 00:35:18.125 | 30.00th=[ 668], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:35:18.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:18.125 | 99.00th=[42206], 99.50th=[42206], 99.90th=[43254], 99.95th=[43254], 00:35:18.125 | 99.99th=[43254] 00:35:18.125 bw ( KiB/s): min= 384, max= 832, per=41.87%, avg=584.00, stdev=196.68, samples=20 00:35:18.125 iops : min= 96, max= 208, avg=146.00, stdev=49.17, samples=20 00:35:18.125 lat (usec) : 750=34.15% 00:35:18.125 lat (msec) : 50=65.85% 00:35:18.125 cpu : usr=94.99%, sys=4.71%, ctx=14, majf=0, minf=114 00:35:18.125 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.125 issued rwts: total=1464,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.125 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:18.125 filename1: (groupid=0, jobs=1): err= 0: pid=2269465: Wed Nov 20 06:45:49 2024 00:35:18.125 read: IOPS=202, BW=811KiB/s (831kB/s)(8144KiB/10037msec) 00:35:18.125 slat (nsec): min=6591, max=28373, avg=9665.53, stdev=2493.85 00:35:18.125 clat (usec): min=510, max=42414, avg=19688.25, stdev=20342.41 00:35:18.125 lat (usec): min=518, max=42427, avg=19697.92, stdev=20342.26 00:35:18.125 clat percentiles (usec): 00:35:18.125 | 1.00th=[ 537], 5.00th=[ 570], 10.00th=[ 586], 20.00th=[ 603], 00:35:18.125 | 30.00th=[ 619], 40.00th=[ 652], 50.00th=[ 676], 60.00th=[41157], 00:35:18.125 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:35:18.125 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:35:18.125 | 99.99th=[42206] 00:35:18.126 bw ( KiB/s): min= 704, max= 896, per=58.21%, avg=812.80, stdev=51.28, samples=20 00:35:18.126 iops : min= 176, max= 224, avg=203.20, stdev=12.82, samples=20 00:35:18.126 lat (usec) : 750=52.06%, 1000=0.98% 00:35:18.126 lat (msec) : 4=0.20%, 50=46.76% 00:35:18.126 cpu : usr=94.63%, sys=5.07%, ctx=14, majf=0, minf=173 00:35:18.126 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:18.126 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:35:18.126 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:18.126 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:18.126 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:18.126 00:35:18.126 Run status group 0 (all jobs): 00:35:18.126 READ: bw=1395KiB/s (1428kB/s), 585KiB/s-811KiB/s (599kB/s-831kB/s), io=13.7MiB (14.3MB), run=10012-10037msec 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.126 00:35:18.126 real 0m11.346s 00:35:18.126 user 0m20.355s 00:35:18.126 sys 0m1.282s 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:18.126 06:45:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:18.126 ************************************ 00:35:18.126 END TEST fio_dif_1_multi_subsystems 00:35:18.126 ************************************ 00:35:18.126 06:45:49 nvmf_dif -- target/dif.sh@143 -- # 
run_test fio_dif_rand_params fio_dif_rand_params 00:35:18.126 06:45:49 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:18.126 06:45:49 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:18.126 06:45:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:18.384 ************************************ 00:35:18.384 START TEST fio_dif_rand_params 00:35:18.384 ************************************ 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1127 -- # fio_dif_rand_params 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:18.384 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.385 bdev_null0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:18.385 [2024-11-20 06:45:49.991937] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:18.385 { 00:35:18.385 "params": { 00:35:18.385 "name": "Nvme$subsystem", 00:35:18.385 "trtype": "$TEST_TRANSPORT", 00:35:18.385 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:18.385 "adrfam": "ipv4", 00:35:18.385 "trsvcid": "$NVMF_PORT", 00:35:18.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:18.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:18.385 "hdgst": ${hdgst:-false}, 00:35:18.385 "ddgst": ${ddgst:-false} 00:35:18.385 }, 00:35:18.385 "method": "bdev_nvme_attach_controller" 00:35:18.385 } 00:35:18.385 EOF 00:35:18.385 )") 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:18.385 06:45:49 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # 
jq . 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:18.385 "params": { 00:35:18.385 "name": "Nvme0", 00:35:18.385 "trtype": "tcp", 00:35:18.385 "traddr": "10.0.0.2", 00:35:18.385 "adrfam": "ipv4", 00:35:18.385 "trsvcid": "4420", 00:35:18.385 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:18.385 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:18.385 "hdgst": false, 00:35:18.385 "ddgst": false 00:35:18.385 }, 00:35:18.385 "method": "bdev_nvme_attach_controller" 00:35:18.385 }' 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:18.385 06:45:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:18.641 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:18.641 ... 
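The per-controller fragment printed above is what gen_nvmf_target_json hands to fio as --spdk_json_conf on /dev/fd/62, while /dev/fd/61 carries the generated job file. A rough standalone equivalent of this traced run is sketched below; the surrounding "subsystems"/"bdev" wrapper and the Nvme0n1 filename are assumptions based on SPDK's usual JSON-config layout and bdev naming, not values shown in the trace, and the job options simply mirror the 128k randread, iodepth=3, 3-thread run above.

# Sketch: attach the traced target and drive it through the preloaded
# SPDK fio bdev plugin (paths are relative to the SPDK repository root).
cat > /tmp/bdev.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
LD_PRELOAD=./build/fio/spdk_bdev /usr/src/fio/fio --name=filename0 \
    --ioengine=spdk_bdev --spdk_json_conf /tmp/bdev.json --thread \
    --filename=Nvme0n1 --rw=randread --bs=128k --iodepth=3 --numjobs=3 --runtime=5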
00:35:18.641 fio-3.35 00:35:18.641 Starting 3 threads 00:35:25.196 00:35:25.197 filename0: (groupid=0, jobs=1): err= 0: pid=2270893: Wed Nov 20 06:45:56 2024 00:35:25.197 read: IOPS=213, BW=26.7MiB/s (28.0MB/s)(135MiB/5047msec) 00:35:25.197 slat (nsec): min=4085, max=48546, avg=14831.86, stdev=3895.62 00:35:25.197 clat (usec): min=4651, max=54095, avg=13986.80, stdev=7255.46 00:35:25.197 lat (usec): min=4663, max=54107, avg=14001.63, stdev=7255.22 00:35:25.197 clat percentiles (usec): 00:35:25.197 | 1.00th=[ 5276], 5.00th=[ 8455], 10.00th=[ 9372], 20.00th=[11469], 00:35:25.197 | 30.00th=[11994], 40.00th=[12387], 50.00th=[12780], 60.00th=[13304], 00:35:25.197 | 70.00th=[13960], 80.00th=[14877], 90.00th=[15926], 95.00th=[17171], 00:35:25.197 | 99.00th=[52167], 99.50th=[53216], 99.90th=[53216], 99.95th=[54264], 00:35:25.197 | 99.99th=[54264] 00:35:25.197 bw ( KiB/s): min=21803, max=32256, per=31.86%, avg=27524.30, stdev=2971.96, samples=10 00:35:25.197 iops : min= 170, max= 252, avg=215.00, stdev=23.29, samples=10 00:35:25.197 lat (msec) : 10=12.15%, 20=84.32%, 50=1.67%, 100=1.86% 00:35:25.197 cpu : usr=95.32%, sys=4.22%, ctx=10, majf=0, minf=108 00:35:25.197 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.197 issued rwts: total=1078,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.197 filename0: (groupid=0, jobs=1): err= 0: pid=2270894: Wed Nov 20 06:45:56 2024 00:35:25.197 read: IOPS=236, BW=29.6MiB/s (31.0MB/s)(149MiB/5044msec) 00:35:25.197 slat (nsec): min=4636, max=80904, avg=15369.06, stdev=4803.03 00:35:25.197 clat (usec): min=5100, max=55681, avg=12593.87, stdev=6234.32 00:35:25.197 lat (usec): min=5112, max=55716, avg=12609.23, stdev=6234.26 00:35:25.197 clat percentiles (usec): 00:35:25.197 | 1.00th=[ 5407], 5.00th=[ 8160], 10.00th=[ 8717], 20.00th=[10159], 00:35:25.197 | 30.00th=[11207], 40.00th=[11731], 50.00th=[12125], 60.00th=[12387], 00:35:25.197 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14091], 95.00th=[15008], 00:35:25.197 | 99.00th=[51119], 99.50th=[52691], 99.90th=[55837], 99.95th=[55837], 00:35:25.197 | 99.99th=[55837] 00:35:25.197 bw ( KiB/s): min=24625, max=33280, per=35.27%, avg=30468.90, stdev=2726.97, samples=10 00:35:25.197 iops : min= 192, max= 260, avg=238.00, stdev=21.40, samples=10 00:35:25.197 lat (msec) : 10=18.52%, 20=79.13%, 50=0.84%, 100=1.51% 00:35:25.197 cpu : usr=94.80%, sys=4.72%, ctx=11, majf=0, minf=80 00:35:25.197 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.197 issued rwts: total=1193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.197 filename0: (groupid=0, jobs=1): err= 0: pid=2270895: Wed Nov 20 06:45:56 2024 00:35:25.197 read: IOPS=226, BW=28.3MiB/s (29.7MB/s)(142MiB/5006msec) 00:35:25.197 slat (nsec): min=4394, max=99724, avg=21299.87, stdev=6211.55 00:35:25.197 clat (usec): min=4990, max=58512, avg=13204.57, stdev=7086.06 00:35:25.197 lat (usec): min=4998, max=58545, avg=13225.87, stdev=7085.61 00:35:25.197 clat percentiles (usec): 00:35:25.197 | 1.00th=[ 7242], 5.00th=[ 8160], 10.00th=[ 8848], 
20.00th=[10683], 00:35:25.197 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12256], 60.00th=[12649], 00:35:25.197 | 70.00th=[13042], 80.00th=[13829], 90.00th=[14746], 95.00th=[15926], 00:35:25.197 | 99.00th=[51643], 99.50th=[53216], 99.90th=[58459], 99.95th=[58459], 00:35:25.197 | 99.99th=[58459] 00:35:25.197 bw ( KiB/s): min=20736, max=32512, per=33.55%, avg=28979.20, stdev=3306.71, samples=10 00:35:25.197 iops : min= 162, max= 254, avg=226.40, stdev=25.83, samples=10 00:35:25.197 lat (msec) : 10=14.89%, 20=81.94%, 50=1.41%, 100=1.76% 00:35:25.197 cpu : usr=94.91%, sys=4.54%, ctx=12, majf=0, minf=143 00:35:25.197 IO depths : 1=1.0%, 2=99.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:25.197 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.197 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:25.197 issued rwts: total=1135,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:25.197 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:25.197 00:35:25.197 Run status group 0 (all jobs): 00:35:25.197 READ: bw=84.4MiB/s (88.5MB/s), 26.7MiB/s-29.6MiB/s (28.0MB/s-31.0MB/s), io=426MiB (446MB), run=5006-5047msec 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- 
# rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 bdev_null0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 [2024-11-20 06:45:56.361731] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 bdev_null1 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:25.197 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.198 bdev_null2 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:25.198 { 00:35:25.198 "params": { 00:35:25.198 "name": "Nvme$subsystem", 00:35:25.198 "trtype": "$TEST_TRANSPORT", 00:35:25.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.198 "adrfam": "ipv4", 00:35:25.198 "trsvcid": "$NVMF_PORT", 00:35:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.198 "hdgst": ${hdgst:-false}, 00:35:25.198 "ddgst": ${ddgst:-false} 00:35:25.198 }, 00:35:25.198 "method": "bdev_nvme_attach_controller" 00:35:25.198 } 00:35:25.198 EOF 00:35:25.198 )") 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:25.198 { 00:35:25.198 "params": { 00:35:25.198 "name": "Nvme$subsystem", 00:35:25.198 "trtype": "$TEST_TRANSPORT", 00:35:25.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.198 "adrfam": "ipv4", 00:35:25.198 "trsvcid": "$NVMF_PORT", 00:35:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.198 "hdgst": ${hdgst:-false}, 00:35:25.198 "ddgst": ${ddgst:-false} 00:35:25.198 }, 00:35:25.198 "method": "bdev_nvme_attach_controller" 00:35:25.198 } 00:35:25.198 EOF 00:35:25.198 )") 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:25.198 06:45:56 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:25.198 { 00:35:25.198 "params": { 00:35:25.198 "name": "Nvme$subsystem", 00:35:25.198 "trtype": "$TEST_TRANSPORT", 00:35:25.198 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:25.198 "adrfam": "ipv4", 00:35:25.198 "trsvcid": "$NVMF_PORT", 00:35:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:25.198 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:25.198 "hdgst": ${hdgst:-false}, 00:35:25.198 "ddgst": ${ddgst:-false} 00:35:25.198 }, 00:35:25.198 "method": "bdev_nvme_attach_controller" 00:35:25.198 } 00:35:25.198 EOF 00:35:25.198 )") 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:25.198 "params": { 00:35:25.198 "name": "Nvme0", 00:35:25.198 "trtype": "tcp", 00:35:25.198 "traddr": "10.0.0.2", 00:35:25.198 "adrfam": "ipv4", 00:35:25.198 "trsvcid": "4420", 00:35:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:25.198 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:25.198 "hdgst": false, 00:35:25.198 "ddgst": false 00:35:25.198 }, 00:35:25.198 "method": "bdev_nvme_attach_controller" 00:35:25.198 },{ 00:35:25.198 "params": { 00:35:25.198 "name": "Nvme1", 00:35:25.198 "trtype": "tcp", 00:35:25.198 "traddr": "10.0.0.2", 00:35:25.198 "adrfam": "ipv4", 00:35:25.198 "trsvcid": "4420", 00:35:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:25.198 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:25.198 "hdgst": false, 00:35:25.198 "ddgst": false 00:35:25.198 }, 00:35:25.198 "method": "bdev_nvme_attach_controller" 00:35:25.198 },{ 00:35:25.198 "params": { 00:35:25.198 "name": "Nvme2", 00:35:25.198 "trtype": "tcp", 00:35:25.198 "traddr": "10.0.0.2", 00:35:25.198 "adrfam": "ipv4", 00:35:25.198 "trsvcid": "4420", 00:35:25.198 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:25.198 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:25.198 "hdgst": false, 00:35:25.198 "ddgst": false 00:35:25.198 }, 00:35:25.198 "method": "bdev_nvme_attach_controller" 00:35:25.198 }' 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 
00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:25.198 06:45:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:25.198 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:25.198 ... 00:35:25.198 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:25.198 ... 00:35:25.198 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:25.198 ... 00:35:25.198 fio-3.35 00:35:25.198 Starting 24 threads 00:35:37.391 00:35:37.391 filename0: (groupid=0, jobs=1): err= 0: pid=2271719: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.392 slat (usec): min=8, max=106, avg=41.26, stdev=17.60 00:35:37.392 clat (usec): min=15376, max=49443, avg=33421.06, stdev=2091.52 00:35:37.392 lat (usec): min=15419, max=49503, avg=33462.32, stdev=2092.25 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[20579], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:37.392 | 99.00th=[35390], 99.50th=[46924], 99.90th=[48497], 99.95th=[48497], 00:35:37.392 | 99.99th=[49546] 00:35:37.392 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=52.53, samples=20 00:35:37.392 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:37.392 lat (msec) : 20=0.84%, 50=99.16% 00:35:37.392 cpu : usr=97.56%, sys=1.74%, ctx=51, majf=0, minf=26 00:35:37.392 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271720: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.392 slat (nsec): min=8793, max=77215, avg=35026.38, stdev=9771.96 00:35:37.392 clat (usec): min=14890, max=50573, avg=33472.35, stdev=2410.17 00:35:37.392 lat (usec): min=14915, max=50623, avg=33507.38, stdev=2410.78 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[19792], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:37.392 | 99.00th=[40109], 99.50th=[47449], 99.90th=[50594], 99.95th=[50594], 00:35:37.392 | 99.99th=[50594] 00:35:37.392 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=48.81, samples=20 00:35:37.392 iops : min= 448, max= 480, avg=473.60, stdev=12.20, samples=20 00:35:37.392 lat (msec) : 20=1.01%, 50=98.82%, 100=0.17% 00:35:37.392 cpu : usr=97.91%, sys=1.41%, ctx=75, majf=0, minf=12 00:35:37.392 IO depths : 1=4.7%, 2=10.9%, 4=24.9%, 
8=51.6%, 16=7.8%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271721: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec) 00:35:37.392 slat (usec): min=9, max=109, avg=45.88, stdev=19.06 00:35:37.392 clat (usec): min=17133, max=57414, avg=33515.07, stdev=1771.65 00:35:37.392 lat (usec): min=17161, max=57448, avg=33560.94, stdev=1770.73 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:35:37.392 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:37.392 | 99.00th=[35914], 99.50th=[39060], 99.90th=[57410], 99.95th=[57410], 00:35:37.392 | 99.99th=[57410] 00:35:37.392 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20 00:35:37.392 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:37.392 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.392 cpu : usr=96.82%, sys=2.10%, ctx=142, majf=0, minf=26 00:35:37.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271722: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=474, BW=1897KiB/s (1943kB/s)(18.6MiB/10018msec) 00:35:37.392 slat (usec): min=4, max=108, avg=28.39, stdev=26.32 00:35:37.392 clat (usec): min=13673, max=39563, avg=33486.58, stdev=1791.62 00:35:37.392 lat (usec): min=13684, max=39645, avg=33514.97, stdev=1789.76 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[20841], 5.00th=[32637], 10.00th=[33162], 20.00th=[33424], 00:35:37.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.392 | 99.00th=[35390], 99.50th=[35914], 99.90th=[39060], 99.95th=[39584], 00:35:37.392 | 99.99th=[39584] 00:35:37.392 bw ( KiB/s): min= 1792, max= 2048, per=4.18%, avg=1894.40, stdev=66.96, samples=20 00:35:37.392 iops : min= 448, max= 512, avg=473.60, stdev=16.74, samples=20 00:35:37.392 lat (msec) : 20=0.72%, 50=99.28% 00:35:37.392 cpu : usr=98.24%, sys=1.33%, ctx=18, majf=0, minf=27 00:35:37.392 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271723: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=469, BW=1880KiB/s (1925kB/s)(18.4MiB/10011msec) 00:35:37.392 slat (usec): min=8, max=122, avg=58.58, stdev=29.71 00:35:37.392 clat (usec): 
min=19227, max=78473, avg=33517.44, stdev=3220.96 00:35:37.392 lat (usec): min=19265, max=78510, avg=33576.02, stdev=3218.22 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[26346], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:37.392 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:37.392 | 99.00th=[46400], 99.50th=[47449], 99.90th=[78119], 99.95th=[78119], 00:35:37.392 | 99.99th=[78119] 00:35:37.392 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1875.20, stdev=75.15, samples=20 00:35:37.392 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:35:37.392 lat (msec) : 20=0.49%, 50=99.17%, 100=0.34% 00:35:37.392 cpu : usr=98.03%, sys=1.34%, ctx=59, majf=0, minf=15 00:35:37.392 IO depths : 1=5.7%, 2=11.9%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271724: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10017msec) 00:35:37.392 slat (usec): min=8, max=115, avg=37.76, stdev=22.07 00:35:37.392 clat (usec): min=29359, max=49834, avg=33638.69, stdev=1016.21 00:35:37.392 lat (usec): min=29400, max=49851, avg=33676.45, stdev=1012.32 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.392 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:37.392 | 99.00th=[36439], 99.50th=[39060], 99.90th=[47449], 99.95th=[47449], 00:35:37.392 | 99.99th=[50070] 00:35:37.392 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:35:37.392 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:35:37.392 lat (msec) : 50=100.00% 00:35:37.392 cpu : usr=98.33%, sys=1.25%, ctx=29, majf=0, minf=16 00:35:37.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271725: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10015msec) 00:35:37.392 slat (usec): min=10, max=115, avg=52.64, stdev=23.11 00:35:37.392 clat (usec): min=17182, max=59297, avg=33494.50, stdev=1857.23 00:35:37.392 lat (usec): min=17224, max=59334, avg=33547.14, stdev=1855.60 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:37.392 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:37.392 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:37.392 | 99.00th=[35914], 99.50th=[39060], 99.90th=[58983], 99.95th=[58983], 00:35:37.392 | 99.99th=[59507] 00:35:37.392 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.20, stdev=72.92, samples=20 
00:35:37.392 iops : min= 416, max= 480, avg=470.30, stdev=18.23, samples=20 00:35:37.392 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.392 cpu : usr=98.38%, sys=1.19%, ctx=14, majf=0, minf=16 00:35:37.392 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.392 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.392 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.392 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.392 filename0: (groupid=0, jobs=1): err= 0: pid=2271726: Wed Nov 20 06:46:07 2024 00:35:37.392 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 00:35:37.392 slat (usec): min=9, max=124, avg=39.54, stdev=14.84 00:35:37.392 clat (usec): min=17148, max=58778, avg=33574.68, stdev=1834.30 00:35:37.392 lat (usec): min=17163, max=58792, avg=33614.23, stdev=1833.08 00:35:37.392 clat percentiles (usec): 00:35:37.392 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.392 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.393 | 99.00th=[35914], 99.50th=[39060], 99.90th=[58983], 99.95th=[58983], 00:35:37.393 | 99.99th=[58983] 00:35:37.393 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.20, stdev=72.92, samples=20 00:35:37.393 iops : min= 416, max= 480, avg=470.30, stdev=18.23, samples=20 00:35:37.393 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.393 cpu : usr=98.06%, sys=1.29%, ctx=70, majf=0, minf=32 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271727: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec) 00:35:37.393 slat (usec): min=12, max=100, avg=38.53, stdev=14.03 00:35:37.393 clat (usec): min=17181, max=57836, avg=33612.96, stdev=1778.83 00:35:37.393 lat (usec): min=17216, max=57869, avg=33651.49, stdev=1778.37 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.393 | 99.00th=[35914], 99.50th=[39060], 99.90th=[57934], 99.95th=[57934], 00:35:37.393 | 99.99th=[57934] 00:35:37.393 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1881.35, stdev=72.45, samples=20 00:35:37.393 iops : min= 416, max= 480, avg=470.30, stdev=18.23, samples=20 00:35:37.393 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.393 cpu : usr=98.15%, sys=1.36%, ctx=22, majf=0, minf=26 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271728: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.393 slat (nsec): min=9173, max=82215, avg=34994.11, stdev=11579.48 00:35:37.393 clat (usec): min=14521, max=40240, avg=33477.17, stdev=1561.30 00:35:37.393 lat (usec): min=14543, max=40262, avg=33512.16, stdev=1560.70 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[23462], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.393 | 99.00th=[35390], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:35:37.393 | 99.99th=[40109] 00:35:37.393 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=52.53, samples=20 00:35:37.393 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:37.393 lat (msec) : 20=0.34%, 50=99.66% 00:35:37.393 cpu : usr=97.85%, sys=1.40%, ctx=76, majf=0, minf=33 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271729: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.393 slat (nsec): min=7207, max=83280, avg=34212.81, stdev=10265.17 00:35:37.393 clat (usec): min=14400, max=40194, avg=33470.56, stdev=1571.08 00:35:37.393 lat (usec): min=14448, max=40216, avg=33504.77, stdev=1569.21 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[23725], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.393 | 99.00th=[35390], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:35:37.393 | 99.99th=[40109] 00:35:37.393 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=52.53, samples=20 00:35:37.393 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:37.393 lat (msec) : 20=0.44%, 50=99.56% 00:35:37.393 cpu : usr=98.34%, sys=1.25%, ctx=14, majf=0, minf=18 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271730: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10017msec) 00:35:37.393 slat (nsec): min=8210, max=95482, avg=35187.38, stdev=15859.65 00:35:37.393 clat (usec): min=30844, max=47912, avg=33671.51, stdev=1010.79 00:35:37.393 lat (usec): min=30886, max=47930, avg=33706.70, stdev=1008.28 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 
00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.393 | 99.00th=[35914], 99.50th=[39060], 99.90th=[47973], 99.95th=[47973], 00:35:37.393 | 99.99th=[47973] 00:35:37.393 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:35:37.393 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:35:37.393 lat (msec) : 50=100.00% 00:35:37.393 cpu : usr=98.42%, sys=1.14%, ctx=27, majf=0, minf=19 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271731: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10012msec) 00:35:37.393 slat (usec): min=8, max=147, avg=48.59, stdev=20.83 00:35:37.393 clat (usec): min=16215, max=56904, avg=33508.91, stdev=1778.69 00:35:37.393 lat (usec): min=16286, max=56929, avg=33557.50, stdev=1775.04 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:37.393 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:37.393 | 99.00th=[35914], 99.50th=[39060], 99.90th=[56886], 99.95th=[56886], 00:35:37.393 | 99.99th=[56886] 00:35:37.393 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20 00:35:37.393 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:37.393 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.393 cpu : usr=98.07%, sys=1.37%, ctx=56, majf=0, minf=28 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271732: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 00:35:37.393 slat (usec): min=7, max=114, avg=39.14, stdev=13.21 00:35:37.393 clat (usec): min=17140, max=58804, avg=33579.03, stdev=1831.48 00:35:37.393 lat (usec): min=17167, max=58822, avg=33618.18, stdev=1830.48 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.393 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.393 | 99.00th=[35914], 99.50th=[39060], 99.90th=[58983], 99.95th=[58983], 00:35:37.393 | 99.99th=[58983] 00:35:37.393 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.20, stdev=72.92, samples=20 00:35:37.393 iops : min= 416, max= 480, avg=470.30, stdev=18.23, samples=20 00:35:37.393 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.393 cpu : usr=97.71%, sys=1.49%, ctx=86, majf=0, minf=31 00:35:37.393 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.393 filename1: (groupid=0, jobs=1): err= 0: pid=2271733: Wed Nov 20 06:46:07 2024 00:35:37.393 read: IOPS=471, BW=1888KiB/s (1933kB/s)(18.5MiB/10013msec) 00:35:37.393 slat (usec): min=7, max=110, avg=35.37, stdev=15.26 00:35:37.393 clat (usec): min=14363, max=57109, avg=33603.39, stdev=3465.93 00:35:37.393 lat (usec): min=14373, max=57136, avg=33638.75, stdev=3467.80 00:35:37.393 clat percentiles (usec): 00:35:37.393 | 1.00th=[19268], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.393 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.393 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:37.393 | 99.00th=[48497], 99.50th=[49546], 99.90th=[56886], 99.95th=[56886], 00:35:37.393 | 99.99th=[56886] 00:35:37.393 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1884.00, stdev=70.15, samples=20 00:35:37.393 iops : min= 416, max= 480, avg=471.00, stdev=17.54, samples=20 00:35:37.393 lat (msec) : 20=2.07%, 50=97.46%, 100=0.47% 00:35:37.393 cpu : usr=98.35%, sys=1.18%, ctx=55, majf=0, minf=23 00:35:37.393 IO depths : 1=3.8%, 2=8.8%, 4=22.3%, 8=55.8%, 16=9.3%, 32=0.0%, >=64=0.0% 00:35:37.393 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 complete : 0=0.0%, 4=93.6%, 8=1.2%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.393 issued rwts: total=4726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.393 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename1: (groupid=0, jobs=1): err= 0: pid=2271734: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10006msec) 00:35:37.394 slat (usec): min=8, max=119, avg=50.57, stdev=31.83 00:35:37.394 clat (usec): min=19866, max=47870, avg=33470.89, stdev=1210.73 00:35:37.394 lat (usec): min=19876, max=47881, avg=33521.46, stdev=1204.66 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[32113], 5.00th=[32375], 10.00th=[32637], 20.00th=[32900], 00:35:37.394 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.394 | 99.00th=[36439], 99.50th=[38536], 99.90th=[47449], 99.95th=[47449], 00:35:37.394 | 99.99th=[47973] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1886.32, stdev=57.91, samples=19 00:35:37.394 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19 00:35:37.394 lat (msec) : 20=0.04%, 50=99.96% 00:35:37.394 cpu : usr=98.25%, sys=1.19%, ctx=69, majf=0, minf=18 00:35:37.394 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271735: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10014msec) 00:35:37.394 slat (usec): min=5, max=121, avg=51.98, stdev=23.16 00:35:37.394 clat (usec): min=29981, max=53945, avg=33493.78, stdev=972.61 00:35:37.394 lat (usec): 
min=30067, max=54030, avg=33545.76, stdev=968.96 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:35:37.394 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33424], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[33817], 95.00th=[34341], 00:35:37.394 | 99.00th=[35390], 99.50th=[40109], 99.90th=[43254], 99.95th=[43254], 00:35:37.394 | 99.99th=[53740] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.15%, avg=1880.80, stdev=59.75, samples=20 00:35:37.394 iops : min= 448, max= 480, avg=470.20, stdev=14.94, samples=20 00:35:37.394 lat (msec) : 50=99.96%, 100=0.04% 00:35:37.394 cpu : usr=98.12%, sys=1.30%, ctx=77, majf=0, minf=17 00:35:37.394 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4718,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271736: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.394 slat (nsec): min=11761, max=83356, avg=36437.35, stdev=11164.05 00:35:37.394 clat (usec): min=14536, max=40036, avg=33448.60, stdev=1554.70 00:35:37.394 lat (usec): min=14570, max=40074, avg=33485.03, stdev=1554.66 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[23725], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.394 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.394 | 99.00th=[35390], 99.50th=[35390], 99.90th=[40109], 99.95th=[40109], 00:35:37.394 | 99.99th=[40109] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=52.53, samples=20 00:35:37.394 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:37.394 lat (msec) : 20=0.34%, 50=99.66% 00:35:37.394 cpu : usr=97.84%, sys=1.46%, ctx=65, majf=0, minf=20 00:35:37.394 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271737: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=471, BW=1885KiB/s (1931kB/s)(18.4MiB/10014msec) 00:35:37.394 slat (nsec): min=9031, max=90460, avg=36669.86, stdev=12634.64 00:35:37.394 clat (usec): min=29854, max=46749, avg=33637.43, stdev=874.11 00:35:37.394 lat (usec): min=29897, max=46778, avg=33674.10, stdev=872.19 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.394 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:37.394 | 99.00th=[35914], 99.50th=[39060], 99.90th=[44827], 99.95th=[44827], 00:35:37.394 | 99.99th=[46924] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:35:37.394 iops : min= 448, max= 480, avg=470.40, 
stdev=15.05, samples=20 00:35:37.394 lat (msec) : 50=100.00% 00:35:37.394 cpu : usr=97.81%, sys=1.53%, ctx=108, majf=0, minf=27 00:35:37.394 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271738: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.394 slat (usec): min=8, max=164, avg=47.96, stdev=28.73 00:35:37.394 clat (usec): min=15779, max=40235, avg=33364.60, stdev=1537.21 00:35:37.394 lat (usec): min=15796, max=40255, avg=33412.56, stdev=1534.14 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[23200], 5.00th=[32375], 10.00th=[32637], 20.00th=[33162], 00:35:37.394 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.394 | 99.00th=[35390], 99.50th=[35914], 99.90th=[40109], 99.95th=[40109], 00:35:37.394 | 99.99th=[40109] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=52.53, samples=20 00:35:37.394 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:37.394 lat (msec) : 20=0.34%, 50=99.66% 00:35:37.394 cpu : usr=97.82%, sys=1.32%, ctx=85, majf=0, minf=36 00:35:37.394 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271739: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10017msec) 00:35:37.394 slat (nsec): min=8168, max=95633, avg=22277.56, stdev=14742.31 00:35:37.394 clat (usec): min=31224, max=47350, avg=33766.44, stdev=972.07 00:35:37.394 lat (usec): min=31256, max=47368, avg=33788.72, stdev=969.67 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[32637], 5.00th=[33162], 10.00th=[33162], 20.00th=[33424], 00:35:37.394 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:37.394 | 99.00th=[35914], 99.50th=[39060], 99.90th=[47449], 99.95th=[47449], 00:35:37.394 | 99.99th=[47449] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.75, stdev=59.95, samples=20 00:35:37.394 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:35:37.394 lat (msec) : 50=100.00% 00:35:37.394 cpu : usr=98.09%, sys=1.32%, ctx=60, majf=0, minf=31 00:35:37.394 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271740: Wed Nov 20 06:46:07 2024 
00:35:37.394 read: IOPS=473, BW=1895KiB/s (1940kB/s)(18.6MiB/10031msec) 00:35:37.394 slat (usec): min=8, max=121, avg=44.07, stdev=21.56 00:35:37.394 clat (usec): min=13296, max=50386, avg=33382.03, stdev=2112.18 00:35:37.394 lat (usec): min=13325, max=50408, avg=33426.10, stdev=2112.28 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[20579], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162], 00:35:37.394 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:35:37.394 | 99.00th=[35390], 99.50th=[45876], 99.90th=[47973], 99.95th=[49021], 00:35:37.394 | 99.99th=[50594] 00:35:37.394 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1894.40, stdev=52.53, samples=20 00:35:37.394 iops : min= 448, max= 480, avg=473.60, stdev=13.13, samples=20 00:35:37.394 lat (msec) : 20=0.88%, 50=99.07%, 100=0.04% 00:35:37.394 cpu : usr=98.38%, sys=1.21%, ctx=13, majf=0, minf=22 00:35:37.394 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:35:37.394 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.394 issued rwts: total=4752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.394 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.394 filename2: (groupid=0, jobs=1): err= 0: pid=2271741: Wed Nov 20 06:46:07 2024 00:35:37.394 read: IOPS=469, BW=1880KiB/s (1925kB/s)(18.4MiB/10011msec) 00:35:37.394 slat (nsec): min=7982, max=97982, avg=17055.88, stdev=12090.97 00:35:37.394 clat (usec): min=19833, max=78542, avg=33878.78, stdev=2880.13 00:35:37.394 lat (usec): min=19844, max=78574, avg=33895.83, stdev=2880.98 00:35:37.394 clat percentiles (usec): 00:35:37.394 | 1.00th=[32900], 5.00th=[33162], 10.00th=[33424], 20.00th=[33424], 00:35:37.394 | 30.00th=[33424], 40.00th=[33817], 50.00th=[33817], 60.00th=[33817], 00:35:37.394 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:35:37.394 | 99.00th=[37487], 99.50th=[47449], 99.90th=[78119], 99.95th=[78119], 00:35:37.394 | 99.99th=[78119] 00:35:37.394 bw ( KiB/s): min= 1664, max= 1920, per=4.14%, avg=1875.20, stdev=75.15, samples=20 00:35:37.395 iops : min= 416, max= 480, avg=468.80, stdev=18.79, samples=20 00:35:37.395 lat (msec) : 20=0.15%, 50=99.51%, 100=0.34% 00:35:37.395 cpu : usr=97.91%, sys=1.43%, ctx=70, majf=0, minf=28 00:35:37.395 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:37.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.395 issued rwts: total=4704,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.395 filename2: (groupid=0, jobs=1): err= 0: pid=2271742: Wed Nov 20 06:46:07 2024 00:35:37.395 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10012msec) 00:35:37.395 slat (nsec): min=10790, max=82851, avg=38558.92, stdev=11454.06 00:35:37.395 clat (usec): min=17132, max=61651, avg=33591.04, stdev=1768.37 00:35:37.395 lat (usec): min=17170, max=61665, avg=33629.59, stdev=1767.85 00:35:37.395 clat percentiles (usec): 00:35:37.395 | 1.00th=[32900], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:35:37.395 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:35:37.395 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 
00:35:37.395 | 99.00th=[35914], 99.50th=[39060], 99.90th=[56886], 99.95th=[56886], 00:35:37.395 | 99.99th=[61604] 00:35:37.395 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20 00:35:37.395 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20 00:35:37.395 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:35:37.395 cpu : usr=98.36%, sys=1.23%, ctx=14, majf=0, minf=22 00:35:37.395 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:35:37.395 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.395 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:37.395 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:37.395 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:37.395 00:35:37.395 Run status group 0 (all jobs): 00:35:37.395 READ: bw=44.2MiB/s (46.3MB/s), 1880KiB/s-1897KiB/s (1925kB/s-1943kB/s), io=443MiB (465MB), run=10006-10031msec 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:37.395 06:46:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 bdev_null0 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 
4420 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 [2024-11-20 06:46:08.067322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 bdev_null1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.395 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:37.395 { 00:35:37.395 "params": { 00:35:37.395 "name": "Nvme$subsystem", 00:35:37.395 "trtype": "$TEST_TRANSPORT", 00:35:37.395 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.395 "adrfam": "ipv4", 00:35:37.395 "trsvcid": "$NVMF_PORT", 00:35:37.395 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.395 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.396 "hdgst": ${hdgst:-false}, 00:35:37.396 "ddgst": ${ddgst:-false} 00:35:37.396 }, 00:35:37.396 "method": "bdev_nvme_attach_controller" 00:35:37.396 } 00:35:37.396 EOF 00:35:37.396 )") 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # shift 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libasan 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:37.396 { 00:35:37.396 "params": { 00:35:37.396 "name": "Nvme$subsystem", 00:35:37.396 "trtype": "$TEST_TRANSPORT", 00:35:37.396 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:37.396 "adrfam": "ipv4", 00:35:37.396 "trsvcid": "$NVMF_PORT", 00:35:37.396 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:37.396 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:37.396 "hdgst": ${hdgst:-false}, 00:35:37.396 "ddgst": ${ddgst:-false} 00:35:37.396 }, 00:35:37.396 "method": "bdev_nvme_attach_controller" 00:35:37.396 } 00:35:37.396 EOF 00:35:37.396 )") 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:37.396 06:46:08 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:37.396 "params": { 00:35:37.396 "name": "Nvme0", 00:35:37.396 "trtype": "tcp", 00:35:37.396 "traddr": "10.0.0.2", 00:35:37.396 "adrfam": "ipv4", 00:35:37.396 "trsvcid": "4420", 00:35:37.396 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:37.396 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:37.396 "hdgst": false, 00:35:37.396 "ddgst": false 00:35:37.396 }, 00:35:37.396 "method": "bdev_nvme_attach_controller" 00:35:37.396 },{ 00:35:37.396 "params": { 00:35:37.396 "name": "Nvme1", 00:35:37.396 "trtype": "tcp", 00:35:37.396 "traddr": "10.0.0.2", 00:35:37.396 "adrfam": "ipv4", 00:35:37.396 "trsvcid": "4420", 00:35:37.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:37.396 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:37.396 "hdgst": false, 00:35:37.396 "ddgst": false 00:35:37.396 }, 00:35:37.396 "method": "bdev_nvme_attach_controller" 00:35:37.396 }' 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:37.396 06:46:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:37.396 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:37.396 ... 00:35:37.396 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:37.396 ... 
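Editor's note: the trace above is where the harness hands fio an SPDK bdev JSON config on /dev/fd/62 and a generated job file on /dev/fd/61. A rough, simplified standalone equivalent is sketched below; it is not part of the harness. The workspace path, NQNs, the 10.0.0.2:4420 listener and the hdgst/ddgst values are taken from this run, while /tmp/bdev.json, the single 8k block size and the NvmeXnY bdev name are illustrative assumptions (Nvme0n1 is the usual SPDK naming for the first namespace of a controller attached as "Nvme0").

```bash
#!/usr/bin/env bash
# Hand-written sketch of the fio-over-spdk_bdev invocation traced above.
# Assumes the target already exposes nqn.2016-06.io.spdk:cnode0 on
# 10.0.0.2:4420 (created earlier in this log) and that SPDK was built
# with ./configure --with-fio, producing build/fio/spdk_bdev.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Minimal SPDK JSON config: one NVMe/TCP controller, digests disabled,
# mirroring the params printed by gen_nvmf_target_json above.
cat > /tmp/bdev.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0", "trtype": "tcp", "adrfam": "ipv4",
            "traddr": "10.0.0.2", "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false, "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF

# With the spdk_bdev ioengine, --filename is a bdev name, not a path.
LD_PRELOAD="$SPDK/build/fio/spdk_bdev" fio \
  --ioengine=spdk_bdev --spdk_json_conf=/tmp/bdev.json --thread \
  --name=filename0 --filename=Nvme0n1 \
  --rw=randread --bs=8k --iodepth=8 --numjobs=2 --runtime=5 --time_based=1
```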
00:35:37.396 fio-3.35 00:35:37.396 Starting 4 threads 00:35:42.660 00:35:42.660 filename0: (groupid=0, jobs=1): err= 0: pid=2273118: Wed Nov 20 06:46:14 2024 00:35:42.660 read: IOPS=1868, BW=14.6MiB/s (15.3MB/s)(73.0MiB/5003msec) 00:35:42.660 slat (nsec): min=4079, max=34446, avg=13755.57, stdev=3957.25 00:35:42.660 clat (usec): min=751, max=7618, avg=4229.59, stdev=653.08 00:35:42.660 lat (usec): min=765, max=7634, avg=4243.35, stdev=652.54 00:35:42.660 clat percentiles (usec): 00:35:42.660 | 1.00th=[ 2671], 5.00th=[ 3654], 10.00th=[ 3916], 20.00th=[ 4015], 00:35:42.660 | 30.00th=[ 4047], 40.00th=[ 4080], 50.00th=[ 4080], 60.00th=[ 4113], 00:35:42.660 | 70.00th=[ 4178], 80.00th=[ 4293], 90.00th=[ 4948], 95.00th=[ 5604], 00:35:42.660 | 99.00th=[ 6783], 99.50th=[ 6980], 99.90th=[ 7308], 99.95th=[ 7373], 00:35:42.660 | 99.99th=[ 7635] 00:35:42.660 bw ( KiB/s): min=14416, max=15504, per=24.34%, avg=14946.90, stdev=333.38, samples=10 00:35:42.660 iops : min= 1802, max= 1938, avg=1868.30, stdev=41.67, samples=10 00:35:42.660 lat (usec) : 1000=0.16% 00:35:42.660 lat (msec) : 2=0.51%, 4=19.82%, 10=79.50% 00:35:42.660 cpu : usr=95.22%, sys=4.34%, ctx=7, majf=0, minf=9 00:35:42.660 IO depths : 1=0.1%, 2=17.6%, 4=55.6%, 8=26.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 issued rwts: total=9347,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:42.660 filename0: (groupid=0, jobs=1): err= 0: pid=2273119: Wed Nov 20 06:46:14 2024 00:35:42.660 read: IOPS=1988, BW=15.5MiB/s (16.3MB/s)(77.7MiB/5004msec) 00:35:42.660 slat (nsec): min=3988, max=37096, avg=12151.45, stdev=4292.34 00:35:42.660 clat (usec): min=745, max=7303, avg=3978.91, stdev=404.23 00:35:42.660 lat (usec): min=759, max=7317, avg=3991.06, stdev=404.64 00:35:42.660 clat percentiles (usec): 00:35:42.660 | 1.00th=[ 2900], 5.00th=[ 3359], 10.00th=[ 3490], 20.00th=[ 3752], 00:35:42.660 | 30.00th=[ 3916], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:35:42.660 | 70.00th=[ 4113], 80.00th=[ 4146], 90.00th=[ 4228], 95.00th=[ 4424], 00:35:42.660 | 99.00th=[ 5145], 99.50th=[ 5735], 99.90th=[ 7046], 99.95th=[ 7242], 00:35:42.660 | 99.99th=[ 7308] 00:35:42.660 bw ( KiB/s): min=15440, max=16608, per=25.91%, avg=15908.80, stdev=350.30, samples=10 00:35:42.660 iops : min= 1930, max= 2076, avg=1988.60, stdev=43.79, samples=10 00:35:42.660 lat (usec) : 750=0.01%, 1000=0.04% 00:35:42.660 lat (msec) : 2=0.25%, 4=38.81%, 10=60.89% 00:35:42.660 cpu : usr=90.13%, sys=6.90%, ctx=169, majf=0, minf=0 00:35:42.660 IO depths : 1=0.4%, 2=17.5%, 4=55.5%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 complete : 0=0.0%, 4=91.6%, 8=8.4%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 issued rwts: total=9951,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:42.660 filename1: (groupid=0, jobs=1): err= 0: pid=2273120: Wed Nov 20 06:46:14 2024 00:35:42.660 read: IOPS=1935, BW=15.1MiB/s (15.9MB/s)(75.6MiB/5003msec) 00:35:42.660 slat (nsec): min=3974, max=37437, avg=13138.26, stdev=3603.86 00:35:42.660 clat (usec): min=801, max=7418, avg=4085.22, stdev=509.13 00:35:42.660 lat (usec): min=815, max=7433, avg=4098.36, stdev=509.04 00:35:42.660 clat percentiles (usec): 00:35:42.660 | 1.00th=[ 
3032], 5.00th=[ 3458], 10.00th=[ 3654], 20.00th=[ 3916], 00:35:42.660 | 30.00th=[ 3982], 40.00th=[ 4047], 50.00th=[ 4080], 60.00th=[ 4113], 00:35:42.660 | 70.00th=[ 4146], 80.00th=[ 4178], 90.00th=[ 4359], 95.00th=[ 4883], 00:35:42.660 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7177], 99.95th=[ 7308], 00:35:42.660 | 99.99th=[ 7439] 00:35:42.660 bw ( KiB/s): min=14832, max=16112, per=25.21%, avg=15479.80, stdev=362.60, samples=10 00:35:42.660 iops : min= 1854, max= 2014, avg=1934.90, stdev=45.35, samples=10 00:35:42.660 lat (usec) : 1000=0.04% 00:35:42.660 lat (msec) : 2=0.38%, 4=30.55%, 10=69.02% 00:35:42.660 cpu : usr=94.00%, sys=5.24%, ctx=124, majf=0, minf=9 00:35:42.660 IO depths : 1=0.4%, 2=18.8%, 4=54.6%, 8=26.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 issued rwts: total=9681,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:42.660 filename1: (groupid=0, jobs=1): err= 0: pid=2273121: Wed Nov 20 06:46:14 2024 00:35:42.660 read: IOPS=1929, BW=15.1MiB/s (15.8MB/s)(76.0MiB/5044msec) 00:35:42.660 slat (nsec): min=4150, max=50738, avg=15532.94, stdev=5497.50 00:35:42.660 clat (usec): min=795, max=43375, avg=4054.68, stdev=657.81 00:35:42.660 lat (usec): min=808, max=43390, avg=4070.21, stdev=657.96 00:35:42.660 clat percentiles (usec): 00:35:42.660 | 1.00th=[ 2573], 5.00th=[ 3425], 10.00th=[ 3621], 20.00th=[ 3916], 00:35:42.660 | 30.00th=[ 3982], 40.00th=[ 4015], 50.00th=[ 4047], 60.00th=[ 4080], 00:35:42.660 | 70.00th=[ 4113], 80.00th=[ 4178], 90.00th=[ 4293], 95.00th=[ 4883], 00:35:42.660 | 99.00th=[ 6259], 99.50th=[ 6718], 99.90th=[ 7242], 99.95th=[ 7504], 00:35:42.660 | 99.99th=[43254] 00:35:42.660 bw ( KiB/s): min=14944, max=16384, per=25.36%, avg=15568.00, stdev=367.19, samples=10 00:35:42.660 iops : min= 1868, max= 2048, avg=1946.00, stdev=45.90, samples=10 00:35:42.660 lat (usec) : 1000=0.11% 00:35:42.660 lat (msec) : 2=0.54%, 4=33.28%, 10=66.06%, 50=0.01% 00:35:42.660 cpu : usr=90.26%, sys=7.02%, ctx=398, majf=0, minf=9 00:35:42.660 IO depths : 1=0.6%, 2=20.3%, 4=53.1%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:42.660 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:42.660 issued rwts: total=9731,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:42.660 latency : target=0, window=0, percentile=100.00%, depth=8 00:35:42.660 00:35:42.660 Run status group 0 (all jobs): 00:35:42.660 READ: bw=60.0MiB/s (62.9MB/s), 14.6MiB/s-15.5MiB/s (15.3MB/s-16.3MB/s), io=302MiB (317MB), run=5003-5044msec 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.919 00:35:42.919 real 0m24.571s 00:35:42.919 user 4m33.518s 00:35:42.919 sys 0m6.174s 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:42.919 06:46:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:42.919 ************************************ 00:35:42.919 END TEST fio_dif_rand_params 00:35:42.920 ************************************ 00:35:42.920 06:46:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:35:42.920 06:46:14 nvmf_dif -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:42.920 06:46:14 nvmf_dif -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:42.920 06:46:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:42.920 ************************************ 00:35:42.920 START TEST fio_dif_digest 00:35:42.920 ************************************ 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1127 -- # fio_dif_digest 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- 
target/dif.sh@128 -- # ddgst=true 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:42.920 bdev_null0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:42.920 [2024-11-20 06:46:14.616456] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # config=() 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:35:42.920 { 00:35:42.920 "params": { 00:35:42.920 "name": "Nvme$subsystem", 00:35:42.920 "trtype": "$TEST_TRANSPORT", 00:35:42.920 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:42.920 "adrfam": "ipv4", 00:35:42.920 "trsvcid": "$NVMF_PORT", 00:35:42.920 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:42.920 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:42.920 "hdgst": ${hdgst:-false}, 00:35:42.920 "ddgst": ${ddgst:-false} 00:35:42.920 }, 00:35:42.920 "method": "bdev_nvme_attach_controller" 
00:35:42.920 } 00:35:42.920 EOF 00:35:42.920 )") 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1358 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local fio_dir=/usr/src/fio 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local sanitizers 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1342 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # shift 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # local asan_lib= 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libasan 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:35:42.920 "params": { 00:35:42.920 "name": "Nvme0", 00:35:42.920 "trtype": "tcp", 00:35:42.920 "traddr": "10.0.0.2", 00:35:42.920 "adrfam": "ipv4", 00:35:42.920 "trsvcid": "4420", 00:35:42.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:42.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:42.920 "hdgst": true, 00:35:42.920 "ddgst": true 00:35:42.920 }, 00:35:42.920 "method": "bdev_nvme_attach_controller" 00:35:42.920 }' 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # for sanitizer in "${sanitizers[@]}" 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # grep libclang_rt.asan 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # awk '{print $3}' 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # asan_lib= 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # [[ -n '' ]] 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:42.920 06:46:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1354 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:43.178 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:43.178 ... 
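Editor's note: the fio_dif_digest run starting below was set up by the rpc_cmd calls traced above: a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 3, exported through nqn.2016-06.io.spdk:cnode0 on the 10.0.0.2:4420 TCP listener, while the initiator-side JSON turns on NVMe/TCP header and data digests ("hdgst": true, "ddgst": true). Outside the harness, the target-side sequence would look roughly like the sketch below; it assumes a running nvmf_tgt reachable via scripts/rpc.py and shows the transport creation step that the harness performs earlier in the test.

```bash
#!/usr/bin/env bash
# Approximate target-side setup behind the fio_dif_digest run,
# issued against a running nvmf_tgt with SPDK's scripts/rpc.py.
RPC=./scripts/rpc.py
NQN=nqn.2016-06.io.spdk:cnode0

# One-time TCP transport creation (done earlier in the harness).
$RPC nvmf_create_transport -t tcp

# 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 3 (as traced).
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3

# Export it over NVMe/TCP on 10.0.0.2:4420, matching the traced arguments.
$RPC nvmf_create_subsystem "$NQN" --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns "$NQN" bdev_null0
$RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420

# Header/data digests are negotiated from the initiator side: the harness
# passes "hdgst": true and "ddgst": true to bdev_nvme_attach_controller
# in the fio JSON config, as shown in the printf output above.
```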
00:35:43.178 fio-3.35 00:35:43.178 Starting 3 threads 00:35:55.371 00:35:55.371 filename0: (groupid=0, jobs=1): err= 0: pid=2273991: Wed Nov 20 06:46:25 2024 00:35:55.371 read: IOPS=195, BW=24.4MiB/s (25.6MB/s)(246MiB/10048msec) 00:35:55.371 slat (nsec): min=8051, max=57429, avg=19471.05, stdev=4347.28 00:35:55.371 clat (usec): min=10694, max=57708, avg=15295.08, stdev=2757.53 00:35:55.372 lat (usec): min=10719, max=57729, avg=15314.55, stdev=2757.40 00:35:55.372 clat percentiles (usec): 00:35:55.372 | 1.00th=[11731], 5.00th=[12649], 10.00th=[13042], 20.00th=[13566], 00:35:55.372 | 30.00th=[13960], 40.00th=[14353], 50.00th=[14877], 60.00th=[15533], 00:35:55.372 | 70.00th=[16319], 80.00th=[16909], 90.00th=[17695], 95.00th=[18482], 00:35:55.372 | 99.00th=[19530], 99.50th=[19792], 99.90th=[57410], 99.95th=[57934], 00:35:55.372 | 99.99th=[57934] 00:35:55.372 bw ( KiB/s): min=20992, max=27904, per=34.17%, avg=25113.60, stdev=2304.60, samples=20 00:35:55.372 iops : min= 164, max= 218, avg=196.20, stdev=18.00, samples=20 00:35:55.372 lat (msec) : 20=99.54%, 50=0.20%, 100=0.25% 00:35:55.372 cpu : usr=95.31%, sys=4.17%, ctx=31, majf=0, minf=158 00:35:55.372 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.372 issued rwts: total=1965,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.372 filename0: (groupid=0, jobs=1): err= 0: pid=2273992: Wed Nov 20 06:46:25 2024 00:35:55.372 read: IOPS=192, BW=24.1MiB/s (25.2MB/s)(242MiB/10046msec) 00:35:55.372 slat (nsec): min=7924, max=71612, avg=16202.74, stdev=3968.59 00:35:55.372 clat (usec): min=10516, max=55362, avg=15548.91, stdev=2042.09 00:35:55.372 lat (usec): min=10530, max=55381, avg=15565.11, stdev=2042.02 00:35:55.372 clat percentiles (usec): 00:35:55.372 | 1.00th=[11994], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:35:55.372 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15270], 60.00th=[15926], 00:35:55.372 | 70.00th=[16581], 80.00th=[17171], 90.00th=[17695], 95.00th=[18482], 00:35:55.372 | 99.00th=[19530], 99.50th=[19792], 99.90th=[46924], 99.95th=[55313], 00:35:55.372 | 99.99th=[55313] 00:35:55.372 bw ( KiB/s): min=22272, max=26880, per=33.63%, avg=24716.80, stdev=1894.86, samples=20 00:35:55.372 iops : min= 174, max= 210, avg=193.10, stdev=14.80, samples=20 00:35:55.372 lat (msec) : 20=99.74%, 50=0.21%, 100=0.05% 00:35:55.372 cpu : usr=95.43%, sys=4.07%, ctx=17, majf=0, minf=196 00:35:55.372 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.372 issued rwts: total=1933,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.372 filename0: (groupid=0, jobs=1): err= 0: pid=2273993: Wed Nov 20 06:46:25 2024 00:35:55.372 read: IOPS=186, BW=23.3MiB/s (24.4MB/s)(234MiB/10046msec) 00:35:55.372 slat (nsec): min=7987, max=44492, avg=15765.85, stdev=3755.59 00:35:55.372 clat (usec): min=11528, max=58121, avg=16057.70, stdev=2639.45 00:35:55.372 lat (usec): min=11541, max=58162, avg=16073.46, stdev=2639.70 00:35:55.372 clat percentiles (usec): 00:35:55.372 | 1.00th=[12649], 5.00th=[13566], 10.00th=[13960], 20.00th=[14353], 
00:35:55.372 | 30.00th=[14746], 40.00th=[15270], 50.00th=[15795], 60.00th=[16450], 00:35:55.372 | 70.00th=[16909], 80.00th=[17433], 90.00th=[18220], 95.00th=[18744], 00:35:55.372 | 99.00th=[20317], 99.50th=[20841], 99.90th=[57934], 99.95th=[57934], 00:35:55.372 | 99.99th=[57934] 00:35:55.372 bw ( KiB/s): min=19968, max=26368, per=32.56%, avg=23936.00, stdev=2001.15, samples=20 00:35:55.372 iops : min= 156, max= 206, avg=187.00, stdev=15.63, samples=20 00:35:55.372 lat (msec) : 20=98.61%, 50=1.18%, 100=0.21% 00:35:55.372 cpu : usr=95.16%, sys=4.35%, ctx=17, majf=0, minf=120 00:35:55.372 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:55.372 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.372 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:55.372 issued rwts: total=1872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:55.372 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:55.372 00:35:55.372 Run status group 0 (all jobs): 00:35:55.372 READ: bw=71.8MiB/s (75.3MB/s), 23.3MiB/s-24.4MiB/s (24.4MB/s-25.6MB/s), io=721MiB (756MB), run=10046-10048msec 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:35:55.372 00:35:55.372 real 0m11.145s 00:35:55.372 user 0m29.960s 00:35:55.372 sys 0m1.559s 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:55.372 06:46:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:35:55.372 ************************************ 00:35:55.372 END TEST fio_dif_digest 00:35:55.372 ************************************ 00:35:55.372 06:46:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:35:55.372 06:46:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:55.372 rmmod nvme_tcp 00:35:55.372 rmmod nvme_fabrics 00:35:55.372 rmmod nvme_keyring 00:35:55.372 06:46:25 nvmf_dif -- 
nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 2267821 ']' 00:35:55.372 06:46:25 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 2267821 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@952 -- # '[' -z 2267821 ']' 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@956 -- # kill -0 2267821 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@957 -- # uname 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2267821 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2267821' 00:35:55.372 killing process with pid 2267821 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@971 -- # kill 2267821 00:35:55.372 06:46:25 nvmf_dif -- common/autotest_common.sh@976 -- # wait 2267821 00:35:55.372 06:46:26 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:35:55.372 06:46:26 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:55.630 Waiting for block devices as requested 00:35:55.630 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:55.630 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:55.888 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:55.888 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:55.888 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:55.888 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:56.145 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:56.145 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:56.145 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:35:56.403 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:35:56.403 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:35:56.403 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:35:56.660 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:35:56.660 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:35:56.660 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:35:56.660 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:35:56.918 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:56.918 06:46:28 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:56.918 06:46:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:56.918 06:46:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.452 06:46:30 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:59.452 
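Editor's note: the tail of the trace above is the shared teardown. nvmftestfini unloads the kernel NVMe/TCP initiator modules, kills the long-running SPDK target (reactor_0, pid 2267821 in this run), restores the firewall rules the test added, and setup.sh reset rebinds the test devices to their kernel drivers before the network namespace and leftover address are removed. Condensed, the traced commands amount to roughly the sketch below; the pid, interface name (cvl_0_1) and namespace name (cvl_0_0_ns_spdk) are specific to this run, and the netns/kill lines are approximations of what the killprocess and _remove_spdk_ns helpers do internally.

```bash
# Condensed, approximate view of the cleanup traced above (values from this run).
sync
modprobe -v -r nvme-tcp        # also drops nvme_fabrics / nvme_keyring per the rmmod lines
modprobe -v -r nvme-fabrics
kill 2267821                   # the SPDK nvmf target (reactor_0) started earlier in the test

# Drop only the SPDK_NVMF firewall rules that the test added.
iptables-save | grep -v SPDK_NVMF | iptables-restore

# Tear down the target-side network namespace and flush the leftover address.
ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true
ip -4 addr flush cvl_0_1

# Finally, scripts/setup.sh reset rebinds the NICs and ioatdma devices
# from vfio-pci back to their kernel drivers, as listed in the log.
```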
00:35:59.452 real 1m7.776s 00:35:59.452 user 6m31.772s 00:35:59.452 sys 0m17.322s 00:35:59.452 06:46:30 nvmf_dif -- common/autotest_common.sh@1128 -- # xtrace_disable 00:35:59.452 06:46:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 ************************************ 00:35:59.452 END TEST nvmf_dif 00:35:59.452 ************************************ 00:35:59.452 06:46:30 -- spdk/autotest.sh@286 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:59.452 06:46:30 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:35:59.452 06:46:30 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:35:59.452 06:46:30 -- common/autotest_common.sh@10 -- # set +x 00:35:59.452 ************************************ 00:35:59.452 START TEST nvmf_abort_qd_sizes 00:35:59.452 ************************************ 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:35:59.452 * Looking for test storage... 00:35:59.452 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lcov --version 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:59.452 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:35:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.453 --rc genhtml_branch_coverage=1 00:35:59.453 --rc genhtml_function_coverage=1 00:35:59.453 --rc genhtml_legend=1 00:35:59.453 --rc geninfo_all_blocks=1 00:35:59.453 --rc geninfo_unexecuted_blocks=1 00:35:59.453 00:35:59.453 ' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:35:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.453 --rc genhtml_branch_coverage=1 00:35:59.453 --rc genhtml_function_coverage=1 00:35:59.453 --rc genhtml_legend=1 00:35:59.453 --rc geninfo_all_blocks=1 00:35:59.453 --rc geninfo_unexecuted_blocks=1 00:35:59.453 00:35:59.453 ' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:35:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.453 --rc genhtml_branch_coverage=1 00:35:59.453 --rc genhtml_function_coverage=1 00:35:59.453 --rc genhtml_legend=1 00:35:59.453 --rc geninfo_all_blocks=1 00:35:59.453 --rc geninfo_unexecuted_blocks=1 00:35:59.453 00:35:59.453 ' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:35:59.453 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:59.453 --rc genhtml_branch_coverage=1 00:35:59.453 --rc genhtml_function_coverage=1 00:35:59.453 --rc genhtml_legend=1 00:35:59.453 --rc geninfo_all_blocks=1 00:35:59.453 --rc geninfo_unexecuted_blocks=1 00:35:59.453 00:35:59.453 ' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:59.453 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:35:59.453 06:46:30 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@320 -- # local -ga e810 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.0 (0x8086 - 0x159b)' 00:36:01.356 Found 0000:09:00.0 (0x8086 - 0x159b) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:09:00.1 (0x8086 - 0x159b)' 00:36:01.356 Found 0000:09:00.1 (0x8086 - 0x159b) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:01.356 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.0: cvl_0_0' 00:36:01.357 Found net devices under 0000:09:00.0: cvl_0_0 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:09:00.1: cvl_0_1' 00:36:01.357 Found net devices under 0000:09:00.1: cvl_0_1 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:01.357 06:46:33 
nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:01.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:01.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.328 ms 00:36:01.357 00:36:01.357 --- 10.0.0.2 ping statistics --- 00:36:01.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.357 rtt min/avg/max/mdev = 0.328/0.328/0.328/0.000 ms 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:01.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
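The nvmf_tcp_init trace above is the heart of nvmftestinit: the two e810 ports found earlier are turned into a point-to-point test link, with the target-side port isolated in its own network namespace so initiator (10.0.0.1) and target (10.0.0.2) can talk over real hardware on a single host. Condensed into a sketch with the same interface names; the ping checks at the end are the sanity probes whose output follows below:

  TGT_IF=cvl_0_0; INI_IF=cvl_0_1; NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TGT_IF"; ip -4 addr flush "$INI_IF"
  ip netns add "$NS"
  ip link set "$TGT_IF" netns "$NS"          # target port lives inside the namespace

  ip addr add 10.0.0.1/24 dev "$INI_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  ip link set "$INI_IF" up
  ip netns exec "$NS" ip link set "$TGT_IF" up
  ip netns exec "$NS" ip link set lo up

  # Let NVMe/TCP traffic through, tagged so teardown can strip the rule again.
  iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'

  ping -c 1 10.0.0.2                         # host side -> namespaced target port
  ip netns exec "$NS" ping -c 1 10.0.0.1     # target namespace -> host port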
00:36:01.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:36:01.357 00:36:01.357 --- 10.0.0.1 ping statistics --- 00:36:01.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:01.357 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:36:01.357 06:46:33 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:02.730 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:02.730 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:02.730 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:03.666 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=2278797 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 2278797 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # '[' -z 2278797 ']' 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:36:03.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:03.923 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:03.923 [2024-11-20 06:46:35.693266] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:36:03.923 [2024-11-20 06:46:35.693353] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:04.181 [2024-11-20 06:46:35.764639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:04.181 [2024-11-20 06:46:35.829734] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:04.181 [2024-11-20 06:46:35.829787] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:04.181 [2024-11-20 06:46:35.829815] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:04.181 [2024-11-20 06:46:35.829827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:04.181 [2024-11-20 06:46:35.829837] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:04.181 [2024-11-20 06:46:35.831407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:04.181 [2024-11-20 06:46:35.831433] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:04.181 [2024-11-20 06:46:35.831493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:04.181 [2024-11-20 06:46:35.831497] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@866 -- # return 0 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:0b:00.0 ]] 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:0b:00.0 ]] 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:36:04.181 
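nvmfappstart, traced just above, starts the target application inside that namespace and blocks until its RPC socket answers before the test issues any rpc_cmd calls. A rough sketch of the same idea; the polling loop is illustrative (the harness's waitforlisten handles timeouts and dead pids more carefully), and rpc_get_methods is simply used here as a cheap RPC to probe with:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  ip netns exec "$NS" "$SPDK_DIR"/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
  nvmfpid=$!

  # Poll the RPC socket until the app is ready to accept configuration RPCs.
  until "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
      kill -0 "$nvmfpid" 2>/dev/null || { echo "nvmf_tgt died" >&2; exit 1; }
      sleep 0.5
  done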
06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:0b:00.0 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:0b:00.0 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:04.181 06:46:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:04.181 ************************************ 00:36:04.181 START TEST spdk_target_abort 00:36:04.181 ************************************ 00:36:04.181 06:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1127 -- # spdk_target 00:36:04.181 06:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:04.181 06:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:0b:00.0 -b spdk_target 00:36:04.181 06:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:04.181 06:46:35 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.460 spdk_targetn1 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.460 [2024-11-20 06:46:38.836864] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:07.460 [2024-11-20 06:46:38.881169] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:07.460 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:07.461 06:46:38 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:10.735 Initializing NVMe Controllers 00:36:10.735 Attached to NVMe over Fabrics controller at 
10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:10.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:10.735 Initialization complete. Launching workers. 00:36:10.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 12454, failed: 0 00:36:10.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1204, failed to submit 11250 00:36:10.735 success 737, unsuccessful 467, failed 0 00:36:10.735 06:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:10.735 06:46:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:14.128 Initializing NVMe Controllers 00:36:14.128 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:14.128 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:14.128 Initialization complete. Launching workers. 00:36:14.128 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8806, failed: 0 00:36:14.128 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1228, failed to submit 7578 00:36:14.128 success 318, unsuccessful 910, failed 0 00:36:14.128 06:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:14.128 06:46:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:17.408 Initializing NVMe Controllers 00:36:17.408 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:17.408 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:17.408 Initialization complete. Launching workers. 
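Each of these runs comes from the rabort helper: it assembles the transport string one field at a time (trtype, adrfam, traddr, trsvcid, subnqn, exactly as traced above) and then drives the SPDK abort example at queue depths 4, 24 and 64 while mixed 4 KiB I/O is in flight. Reduced to a sketch with the same parameters:

  SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

  for qd in 4 24 64; do
      # -w rw -M 50: 50/50 read/write at 4 KiB (-o 4096); aborts are fired at the in-flight I/O.
      "$SPDK_DIR"/build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
  done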
00:36:17.408 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 30208, failed: 0 00:36:17.408 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2691, failed to submit 27517 00:36:17.408 success 513, unsuccessful 2178, failed 0 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:17.408 06:46:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2278797 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # '[' -z 2278797 ']' 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # kill -0 2278797 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # uname 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2278797 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2278797' 00:36:18.339 killing process with pid 2278797 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@971 -- # kill 2278797 00:36:18.339 06:46:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@976 -- # wait 2278797 00:36:18.597 00:36:18.597 real 0m14.206s 00:36:18.597 user 0m53.937s 00:36:18.597 sys 0m2.560s 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:18.597 ************************************ 00:36:18.597 END TEST spdk_target_abort 00:36:18.597 ************************************ 00:36:18.597 06:46:50 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:18.597 06:46:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:18.597 06:46:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:18.597 06:46:50 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:18.597 ************************************ 00:36:18.597 START TEST kernel_target_abort 00:36:18.597 
************************************ 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1127 -- # kernel_target 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # ip_candidates=() 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:18.597 06:46:50 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:19.531 Waiting for block devices as requested 00:36:19.788 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:19.788 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:19.788 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:20.047 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:20.047 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:20.047 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:20.047 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:20.305 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:20.305 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:20.563 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:20.563 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:20.563 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:20.563 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:20.820 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:20.820 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:20.820 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:20.820 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:21.078 No valid GPT data, bailing 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:21.078 06:46:52 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:21.078 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:21.079 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a --hostid=29f67375-a902-e411-ace9-001e67bc3c9a -a 10.0.0.1 -t tcp -s 4420 00:36:21.336 00:36:21.336 Discovery Log Number of Records 2, Generation counter 2 00:36:21.336 =====Discovery Log Entry 0====== 00:36:21.336 trtype: tcp 00:36:21.336 adrfam: ipv4 00:36:21.336 subtype: current discovery subsystem 00:36:21.336 treq: not specified, sq flow control disable supported 00:36:21.336 portid: 1 00:36:21.336 trsvcid: 4420 00:36:21.336 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:21.336 traddr: 10.0.0.1 00:36:21.336 eflags: none 00:36:21.336 sectype: none 00:36:21.336 =====Discovery Log Entry 1====== 00:36:21.336 trtype: tcp 00:36:21.336 adrfam: ipv4 00:36:21.336 subtype: nvme subsystem 00:36:21.336 treq: not specified, sq flow control disable supported 00:36:21.336 portid: 1 00:36:21.336 trsvcid: 4420 00:36:21.336 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:21.336 traddr: 10.0.0.1 00:36:21.336 eflags: none 00:36:21.336 sectype: none 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.336 06:46:52 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:21.336 06:46:52 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:24.615 Initializing NVMe Controllers 00:36:24.615 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:24.615 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:24.615 Initialization complete. Launching workers. 00:36:24.615 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 48159, failed: 0 00:36:24.615 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 48159, failed to submit 0 00:36:24.615 success 0, unsuccessful 48159, failed 0 00:36:24.615 06:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:24.615 06:46:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.894 Initializing NVMe Controllers 00:36:27.894 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:27.894 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:27.894 Initialization complete. Launching workers. 
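The kernel-target variant needs no SPDK target at all: configure_kernel_target, traced above, builds the subsystem out of nvmet's configfs tree and exports the local /dev/nvme0n1 over TCP on 10.0.0.1:4420. A sketch against the stock nvmet configfs layout; the attribute file names are the standard kernel ones (the xtrace shows the values written but not every redirect target), and the model-string write from the trace is left out:

  nqn=nqn.2016-06.io.spdk:testnqn
  nvmet=/sys/kernel/config/nvmet
  subsys=$nvmet/subsystems/$nqn
  port=$nvmet/ports/1

  modprobe nvmet                            # teardown later unloads nvmet_tcp and nvmet

  mkdir "$subsys"
  mkdir "$subsys/namespaces/1"
  mkdir "$port"

  echo 1            > "$subsys/attr_allow_any_host"
  echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"   # the block device that passed the GPT check
  echo 1            > "$subsys/namespaces/1/enable"

  echo 10.0.0.1 > "$port/addr_traddr"
  echo tcp      > "$port/addr_trtype"
  echo 4420     > "$port/addr_trsvcid"
  echo ipv4     > "$port/addr_adrfam"

  ln -s "$subsys" "$port/subsystems/"

  nvme discover -t tcp -a 10.0.0.1 -s 4420  # should list the discovery and test subsystems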
00:36:27.894 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94729, failed: 0 00:36:27.894 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21510, failed to submit 73219 00:36:27.894 success 0, unsuccessful 21510, failed 0 00:36:27.894 06:46:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.894 06:46:59 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:31.171 Initializing NVMe Controllers 00:36:31.172 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:31.172 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:31.172 Initialization complete. Launching workers. 00:36:31.172 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 87754, failed: 0 00:36:31.172 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 21934, failed to submit 65820 00:36:31.172 success 0, unsuccessful 21934, failed 0 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:36:31.172 06:47:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:31.737 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:31.737 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:31.737 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:31.737 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:31.737 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:31.737 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:31.996 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:31.996 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:31.996 0000:80:04.1 (8086 0e21): ioatdma -> 
vfio-pci 00:36:31.996 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:32.933 0000:0b:00.0 (8086 0a54): nvme -> vfio-pci 00:36:32.933 00:36:32.933 real 0m14.518s 00:36:32.933 user 0m6.165s 00:36:32.933 sys 0m3.456s 00:36:33.191 06:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:33.191 06:47:04 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:33.191 ************************************ 00:36:33.191 END TEST kernel_target_abort 00:36:33.191 ************************************ 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:33.191 rmmod nvme_tcp 00:36:33.191 rmmod nvme_fabrics 00:36:33.191 rmmod nvme_keyring 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 2278797 ']' 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 2278797 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # '[' -z 2278797 ']' 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@956 -- # kill -0 2278797 00:36:33.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 956: kill: (2278797) - No such process 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- common/autotest_common.sh@979 -- # echo 'Process with pid 2278797 is not found' 00:36:33.191 Process with pid 2278797 is not found 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:36:33.191 06:47:04 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:34.124 Waiting for block devices as requested 00:36:34.124 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:34.382 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:34.382 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:34.382 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:34.641 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:34.641 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:34.641 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:34.641 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:34.899 0000:0b:00.0 (8086 0a54): vfio-pci -> nvme 00:36:34.899 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:35.157 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:35.157 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:35.157 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:35.157 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:35.415 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:35.415 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:35.415 0000:80:04.0 
(8086 0e20): vfio-pci -> ioatdma 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:35.733 06:47:07 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:37.636 06:47:09 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:37.636 00:36:37.636 real 0m38.532s 00:36:37.636 user 1m2.426s 00:36:37.636 sys 0m9.537s 00:36:37.636 06:47:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:37.636 06:47:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:37.636 ************************************ 00:36:37.636 END TEST nvmf_abort_qd_sizes 00:36:37.636 ************************************ 00:36:37.636 06:47:09 -- spdk/autotest.sh@288 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:37.636 06:47:09 -- common/autotest_common.sh@1103 -- # '[' 2 -le 1 ']' 00:36:37.636 06:47:09 -- common/autotest_common.sh@1109 -- # xtrace_disable 00:36:37.636 06:47:09 -- common/autotest_common.sh@10 -- # set +x 00:36:37.636 ************************************ 00:36:37.636 START TEST keyring_file 00:36:37.636 ************************************ 00:36:37.636 06:47:09 keyring_file -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:37.636 * Looking for test storage... 
00:36:37.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:37.636 06:47:09 keyring_file -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:37.636 06:47:09 keyring_file -- common/autotest_common.sh@1691 -- # lcov --version 00:36:37.636 06:47:09 keyring_file -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:37.894 06:47:09 keyring_file -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:37.894 06:47:09 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:37.894 06:47:09 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:37.894 06:47:09 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:37.894 06:47:09 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:36:37.894 06:47:09 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:36:37.894 06:47:09 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@345 -- # : 1 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@353 -- # local d=1 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@355 -- # echo 1 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@353 -- # local d=2 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@355 -- # echo 2 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@368 -- # return 0 00:36:37.895 06:47:09 keyring_file -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:37.895 06:47:09 keyring_file -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:37.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.895 --rc genhtml_branch_coverage=1 00:36:37.895 --rc genhtml_function_coverage=1 00:36:37.895 --rc genhtml_legend=1 00:36:37.895 --rc geninfo_all_blocks=1 00:36:37.895 --rc geninfo_unexecuted_blocks=1 00:36:37.895 00:36:37.895 ' 00:36:37.895 06:47:09 keyring_file -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:37.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.895 --rc genhtml_branch_coverage=1 00:36:37.895 --rc genhtml_function_coverage=1 00:36:37.895 --rc genhtml_legend=1 00:36:37.895 --rc geninfo_all_blocks=1 
00:36:37.895 --rc geninfo_unexecuted_blocks=1 00:36:37.895 00:36:37.895 ' 00:36:37.895 06:47:09 keyring_file -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:37.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.895 --rc genhtml_branch_coverage=1 00:36:37.895 --rc genhtml_function_coverage=1 00:36:37.895 --rc genhtml_legend=1 00:36:37.895 --rc geninfo_all_blocks=1 00:36:37.895 --rc geninfo_unexecuted_blocks=1 00:36:37.895 00:36:37.895 ' 00:36:37.895 06:47:09 keyring_file -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:37.895 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:37.895 --rc genhtml_branch_coverage=1 00:36:37.895 --rc genhtml_function_coverage=1 00:36:37.895 --rc genhtml_legend=1 00:36:37.895 --rc geninfo_all_blocks=1 00:36:37.895 --rc geninfo_unexecuted_blocks=1 00:36:37.895 00:36:37.895 ' 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:37.895 06:47:09 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:37.895 06:47:09 keyring_file -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.895 06:47:09 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.895 06:47:09 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.895 06:47:09 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:37.895 06:47:09 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@51 -- # : 0 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:37.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:37.895 06:47:09 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:37.895 06:47:09 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 
00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:37.895 06:47:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nIN183pDit 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nIN183pDit 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nIN183pDit 00:36:37.896 06:47:09 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nIN183pDit 00:36:37.896 06:47:09 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.IrseYky9eJ 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:37.896 06:47:09 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.IrseYky9eJ 00:36:37.896 06:47:09 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.IrseYky9eJ 00:36:37.896 06:47:09 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.IrseYky9eJ 00:36:37.896 06:47:09 keyring_file -- keyring/file.sh@30 -- # tgtpid=2284571 00:36:37.896 06:47:09 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:37.896 06:47:09 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2284571 00:36:37.896 06:47:09 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2284571 ']' 00:36:37.896 06:47:09 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:37.896 06:47:09 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:37.896 06:47:09 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:37.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:37.896 06:47:09 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:37.896 06:47:09 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:37.896 [2024-11-20 06:47:09.679789] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:36:37.896 [2024-11-20 06:47:09.679876] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284571 ] 00:36:38.154 [2024-11-20 06:47:09.743788] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.154 [2024-11-20 06:47:09.799157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:38.412 06:47:10 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:38.412 [2024-11-20 06:47:10.064756] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:38.412 null0 00:36:38.412 [2024-11-20 06:47:10.096774] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:38.412 [2024-11-20 06:47:10.097327] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:38.412 06:47:10 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:38.412 [2024-11-20 06:47:10.120827] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:38.412 request: 00:36:38.412 { 00:36:38.412 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:38.412 "secure_channel": false, 00:36:38.412 "listen_address": { 00:36:38.412 "trtype": "tcp", 00:36:38.412 "traddr": "127.0.0.1", 00:36:38.412 "trsvcid": "4420" 00:36:38.412 }, 00:36:38.412 "method": "nvmf_subsystem_add_listener", 00:36:38.412 "req_id": 1 00:36:38.412 } 00:36:38.412 Got JSON-RPC error response 00:36:38.412 response: 00:36:38.412 { 00:36:38.412 
"code": -32602, 00:36:38.412 "message": "Invalid parameters" 00:36:38.412 } 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:38.412 06:47:10 keyring_file -- keyring/file.sh@47 -- # bperfpid=2284586 00:36:38.412 06:47:10 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2284586 /var/tmp/bperf.sock 00:36:38.412 06:47:10 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2284586 ']' 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:38.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:38.412 06:47:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:38.412 [2024-11-20 06:47:10.172402] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:36:38.412 [2024-11-20 06:47:10.172483] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284586 ] 00:36:38.412 [2024-11-20 06:47:10.241397] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:38.670 [2024-11-20 06:47:10.303028] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:38.670 06:47:10 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:38.670 06:47:10 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:38.670 06:47:10 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:38.670 06:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:38.928 06:47:10 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IrseYky9eJ 00:36:38.928 06:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IrseYky9eJ 00:36:39.185 06:47:10 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:36:39.185 06:47:10 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:39.185 06:47:10 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.185 06:47:10 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:39.185 06:47:10 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:36:39.443 06:47:11 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.nIN183pDit == \/\t\m\p\/\t\m\p\.\n\I\N\1\8\3\p\D\i\t ]] 00:36:39.443 06:47:11 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:36:39.443 06:47:11 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:36:39.443 06:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.443 06:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.443 06:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:39.700 06:47:11 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.IrseYky9eJ == \/\t\m\p\/\t\m\p\.\I\r\s\e\Y\k\y\9\e\J ]] 00:36:39.700 06:47:11 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:36:39.700 06:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:39.700 06:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:39.700 06:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:39.700 06:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:39.700 06:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:40.266 06:47:11 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:40.266 06:47:11 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:36:40.266 06:47:11 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:40.266 06:47:11 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.266 06:47:11 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.266 06:47:11 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.266 06:47:11 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:40.266 06:47:12 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:36:40.266 06:47:12 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.266 06:47:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:40.524 [2024-11-20 06:47:12.318504] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:40.781 nvme0n1 00:36:40.781 06:47:12 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:36:40.781 06:47:12 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:40.781 06:47:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:40.781 06:47:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:40.781 06:47:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:40.781 06:47:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:41.038 06:47:12 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:36:41.038 06:47:12 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:36:41.038 06:47:12 keyring_file 
-- keyring/common.sh@12 -- # get_key key1 00:36:41.038 06:47:12 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:41.038 06:47:12 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:41.038 06:47:12 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:41.038 06:47:12 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:41.296 06:47:12 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:36:41.296 06:47:12 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:41.296 Running I/O for 1 seconds... 00:36:42.487 10486.00 IOPS, 40.96 MiB/s 00:36:42.487 Latency(us) 00:36:42.487 [2024-11-20T05:47:14.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:42.487 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:36:42.487 nvme0n1 : 1.01 10531.77 41.14 0.00 0.00 12111.89 4417.61 18544.26 00:36:42.487 [2024-11-20T05:47:14.323Z] =================================================================================================================== 00:36:42.487 [2024-11-20T05:47:14.323Z] Total : 10531.77 41.14 0.00 0.00 12111.89 4417.61 18544.26 00:36:42.487 { 00:36:42.487 "results": [ 00:36:42.487 { 00:36:42.487 "job": "nvme0n1", 00:36:42.487 "core_mask": "0x2", 00:36:42.487 "workload": "randrw", 00:36:42.487 "percentage": 50, 00:36:42.487 "status": "finished", 00:36:42.487 "queue_depth": 128, 00:36:42.487 "io_size": 4096, 00:36:42.487 "runtime": 1.007903, 00:36:42.487 "iops": 10531.767441906612, 00:36:42.487 "mibps": 41.1397165699477, 00:36:42.487 "io_failed": 0, 00:36:42.487 "io_timeout": 0, 00:36:42.487 "avg_latency_us": 12111.886505818113, 00:36:42.487 "min_latency_us": 4417.6118518518515, 00:36:42.487 "max_latency_us": 18544.26074074074 00:36:42.487 } 00:36:42.487 ], 00:36:42.487 "core_count": 1 00:36:42.487 } 00:36:42.487 06:47:14 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:42.487 06:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:42.745 06:47:14 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:36:42.745 06:47:14 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:42.745 06:47:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:42.745 06:47:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:42.745 06:47:14 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:42.745 06:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.003 06:47:14 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:36:43.003 06:47:14 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:36:43.003 06:47:14 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.003 06:47:14 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.003 06:47:14 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.003 06:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.003 06:47:14 
keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:43.260 06:47:14 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:36:43.260 06:47:14 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:43.260 06:47:14 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.260 06:47:14 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:36:43.517 [2024-11-20 06:47:15.155462] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:43.517 [2024-11-20 06:47:15.155917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbc510 (107): Transport endpoint is not connected 00:36:43.517 [2024-11-20 06:47:15.156902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdbc510 (9): Bad file descriptor 00:36:43.517 [2024-11-20 06:47:15.157901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:43.517 [2024-11-20 06:47:15.157925] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:43.517 [2024-11-20 06:47:15.157945] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:43.517 [2024-11-20 06:47:15.157968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
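The attach above is the negative case run under NOT: nvme0 is attached with --psk key1 rather than the key0 used for the successful attach earlier, so the connection is expected to fail. A sketch of the equivalent manual call, assuming the same bperf socket and key names registered above:

scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
  -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
# expected outcome: JSON-RPC error -5 (Input/output error), as captured in the request/response dump below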
00:36:43.517 request: 00:36:43.517 { 00:36:43.517 "name": "nvme0", 00:36:43.517 "trtype": "tcp", 00:36:43.517 "traddr": "127.0.0.1", 00:36:43.517 "adrfam": "ipv4", 00:36:43.517 "trsvcid": "4420", 00:36:43.517 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:43.517 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:43.517 "prchk_reftag": false, 00:36:43.517 "prchk_guard": false, 00:36:43.517 "hdgst": false, 00:36:43.517 "ddgst": false, 00:36:43.517 "psk": "key1", 00:36:43.517 "allow_unrecognized_csi": false, 00:36:43.517 "method": "bdev_nvme_attach_controller", 00:36:43.517 "req_id": 1 00:36:43.518 } 00:36:43.518 Got JSON-RPC error response 00:36:43.518 response: 00:36:43.518 { 00:36:43.518 "code": -5, 00:36:43.518 "message": "Input/output error" 00:36:43.518 } 00:36:43.518 06:47:15 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:43.518 06:47:15 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:43.518 06:47:15 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:43.518 06:47:15 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:43.518 06:47:15 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:36:43.518 06:47:15 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:43.518 06:47:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.518 06:47:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.518 06:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.518 06:47:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:43.775 06:47:15 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:36:43.775 06:47:15 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:36:43.775 06:47:15 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:43.775 06:47:15 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:43.775 06:47:15 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:43.775 06:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:43.775 06:47:15 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:44.033 06:47:15 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:36:44.033 06:47:15 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:36:44.033 06:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:44.290 06:47:15 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:36:44.290 06:47:15 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:36:44.548 06:47:16 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:36:44.548 06:47:16 keyring_file -- keyring/file.sh@78 -- # jq length 00:36:44.548 06:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:44.805 06:47:16 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 )) 00:36:44.805 06:47:16 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.nIN183pDit 00:36:44.805 06:47:16 keyring_file -- keyring/file.sh@82 -- # 
NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:44.805 06:47:16 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:44.806 06:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:45.064 [2024-11-20 06:47:16.776859] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nIN183pDit': 0100660 00:36:45.064 [2024-11-20 06:47:16.776895] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:36:45.064 request: 00:36:45.064 { 00:36:45.064 "name": "key0", 00:36:45.064 "path": "/tmp/tmp.nIN183pDit", 00:36:45.064 "method": "keyring_file_add_key", 00:36:45.064 "req_id": 1 00:36:45.064 } 00:36:45.064 Got JSON-RPC error response 00:36:45.064 response: 00:36:45.064 { 00:36:45.064 "code": -1, 00:36:45.064 "message": "Operation not permitted" 00:36:45.064 } 00:36:45.064 06:47:16 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:45.064 06:47:16 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:45.064 06:47:16 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:45.064 06:47:16 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:45.064 06:47:16 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.nIN183pDit 00:36:45.064 06:47:16 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:45.064 06:47:16 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nIN183pDit 00:36:45.322 06:47:17 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.nIN183pDit 00:36:45.322 06:47:17 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:36:45.322 06:47:17 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:45.322 06:47:17 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:45.322 06:47:17 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:45.322 06:47:17 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:45.322 06:47:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:45.579 06:47:17 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:36:45.579 06:47:17 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:45.579 06:47:17 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.579 06:47:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:45.836 [2024-11-20 06:47:17.635178] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nIN183pDit': No such file or directory 00:36:45.836 [2024-11-20 06:47:17.635214] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:36:45.836 [2024-11-20 06:47:17.635245] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:36:45.836 [2024-11-20 06:47:17.635265] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:36:45.836 [2024-11-20 06:47:17.635299] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:45.836 [2024-11-20 06:47:17.635327] bdev_nvme.c:6669:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:36:45.836 request: 00:36:45.836 { 00:36:45.836 "name": "nvme0", 00:36:45.836 "trtype": "tcp", 00:36:45.836 "traddr": "127.0.0.1", 00:36:45.836 "adrfam": "ipv4", 00:36:45.836 "trsvcid": "4420", 00:36:45.836 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:45.836 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:45.836 "prchk_reftag": false, 00:36:45.836 "prchk_guard": false, 00:36:45.836 "hdgst": false, 00:36:45.836 "ddgst": false, 00:36:45.836 "psk": "key0", 00:36:45.836 "allow_unrecognized_csi": false, 00:36:45.836 "method": "bdev_nvme_attach_controller", 00:36:45.836 "req_id": 1 00:36:45.836 } 00:36:45.836 Got JSON-RPC error response 00:36:45.836 response: 00:36:45.836 { 00:36:45.836 "code": -19, 00:36:45.837 "message": "No such device" 00:36:45.837 } 00:36:45.837 06:47:17 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:36:45.837 06:47:17 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:45.837 06:47:17 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:45.837 06:47:17 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:45.837 06:47:17 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:36:45.837 06:47:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:46.094 06:47:17 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@15 -- # local name 
key digest path 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.Xr7Nrw9QL2 00:36:46.094 06:47:17 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:46.094 06:47:17 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:46.094 06:47:17 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:36:46.094 06:47:17 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:46.094 06:47:17 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:46.094 06:47:17 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:36:46.094 06:47:17 keyring_file -- nvmf/common.sh@733 -- # python - 00:36:46.352 06:47:17 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.Xr7Nrw9QL2 00:36:46.352 06:47:17 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.Xr7Nrw9QL2 00:36:46.352 06:47:17 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.Xr7Nrw9QL2 00:36:46.352 06:47:17 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Xr7Nrw9QL2 00:36:46.352 06:47:17 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Xr7Nrw9QL2 00:36:46.610 06:47:18 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.610 06:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:46.867 nvme0n1 00:36:46.867 06:47:18 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:36:46.867 06:47:18 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:46.867 06:47:18 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:46.867 06:47:18 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:46.867 06:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:46.867 06:47:18 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.124 06:47:18 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:36:47.124 06:47:18 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:36:47.124 06:47:18 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:36:47.381 06:47:19 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:36:47.381 06:47:19 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:36:47.381 06:47:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.381 06:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:36:47.381 06:47:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.638 06:47:19 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:36:47.638 06:47:19 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:36:47.638 06:47:19 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:47.638 06:47:19 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:47.638 06:47:19 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:47.638 06:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:47.638 06:47:19 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:47.897 06:47:19 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:36:47.897 06:47:19 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:47.897 06:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:48.191 06:47:19 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:36:48.191 06:47:19 keyring_file -- keyring/file.sh@105 -- # jq length 00:36:48.191 06:47:19 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:48.476 06:47:20 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:36:48.476 06:47:20 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.Xr7Nrw9QL2 00:36:48.476 06:47:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.Xr7Nrw9QL2 00:36:48.734 06:47:20 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.IrseYky9eJ 00:36:48.734 06:47:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.IrseYky9eJ 00:36:48.992 06:47:20 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:48.992 06:47:20 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:49.556 nvme0n1 00:36:49.556 06:47:21 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:36:49.556 06:47:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:36:49.814 06:47:21 keyring_file -- keyring/file.sh@113 -- # config='{ 00:36:49.814 "subsystems": [ 00:36:49.814 { 00:36:49.814 "subsystem": "keyring", 00:36:49.814 "config": [ 00:36:49.814 { 00:36:49.814 "method": "keyring_file_add_key", 00:36:49.814 "params": { 00:36:49.814 "name": "key0", 00:36:49.814 "path": "/tmp/tmp.Xr7Nrw9QL2" 00:36:49.814 } 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "method": "keyring_file_add_key", 00:36:49.814 "params": { 00:36:49.814 "name": "key1", 00:36:49.814 "path": "/tmp/tmp.IrseYky9eJ" 00:36:49.814 } 00:36:49.814 } 00:36:49.814 ] 
00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "subsystem": "iobuf", 00:36:49.814 "config": [ 00:36:49.814 { 00:36:49.814 "method": "iobuf_set_options", 00:36:49.814 "params": { 00:36:49.814 "small_pool_count": 8192, 00:36:49.814 "large_pool_count": 1024, 00:36:49.814 "small_bufsize": 8192, 00:36:49.814 "large_bufsize": 135168, 00:36:49.814 "enable_numa": false 00:36:49.814 } 00:36:49.814 } 00:36:49.814 ] 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "subsystem": "sock", 00:36:49.814 "config": [ 00:36:49.814 { 00:36:49.814 "method": "sock_set_default_impl", 00:36:49.814 "params": { 00:36:49.814 "impl_name": "posix" 00:36:49.814 } 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "method": "sock_impl_set_options", 00:36:49.814 "params": { 00:36:49.814 "impl_name": "ssl", 00:36:49.814 "recv_buf_size": 4096, 00:36:49.814 "send_buf_size": 4096, 00:36:49.814 "enable_recv_pipe": true, 00:36:49.814 "enable_quickack": false, 00:36:49.814 "enable_placement_id": 0, 00:36:49.814 "enable_zerocopy_send_server": true, 00:36:49.814 "enable_zerocopy_send_client": false, 00:36:49.814 "zerocopy_threshold": 0, 00:36:49.814 "tls_version": 0, 00:36:49.814 "enable_ktls": false 00:36:49.814 } 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "method": "sock_impl_set_options", 00:36:49.814 "params": { 00:36:49.814 "impl_name": "posix", 00:36:49.814 "recv_buf_size": 2097152, 00:36:49.814 "send_buf_size": 2097152, 00:36:49.814 "enable_recv_pipe": true, 00:36:49.814 "enable_quickack": false, 00:36:49.814 "enable_placement_id": 0, 00:36:49.814 "enable_zerocopy_send_server": true, 00:36:49.814 "enable_zerocopy_send_client": false, 00:36:49.814 "zerocopy_threshold": 0, 00:36:49.814 "tls_version": 0, 00:36:49.814 "enable_ktls": false 00:36:49.814 } 00:36:49.814 } 00:36:49.814 ] 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "subsystem": "vmd", 00:36:49.814 "config": [] 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "subsystem": "accel", 00:36:49.814 "config": [ 00:36:49.814 { 00:36:49.814 "method": "accel_set_options", 00:36:49.814 "params": { 00:36:49.814 "small_cache_size": 128, 00:36:49.814 "large_cache_size": 16, 00:36:49.814 "task_count": 2048, 00:36:49.814 "sequence_count": 2048, 00:36:49.814 "buf_count": 2048 00:36:49.814 } 00:36:49.814 } 00:36:49.814 ] 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "subsystem": "bdev", 00:36:49.814 "config": [ 00:36:49.814 { 00:36:49.814 "method": "bdev_set_options", 00:36:49.814 "params": { 00:36:49.814 "bdev_io_pool_size": 65535, 00:36:49.814 "bdev_io_cache_size": 256, 00:36:49.814 "bdev_auto_examine": true, 00:36:49.814 "iobuf_small_cache_size": 128, 00:36:49.814 "iobuf_large_cache_size": 16 00:36:49.814 } 00:36:49.814 }, 00:36:49.814 { 00:36:49.814 "method": "bdev_raid_set_options", 00:36:49.814 "params": { 00:36:49.814 "process_window_size_kb": 1024, 00:36:49.814 "process_max_bandwidth_mb_sec": 0 00:36:49.814 } 00:36:49.814 }, 00:36:49.815 { 00:36:49.815 "method": "bdev_iscsi_set_options", 00:36:49.815 "params": { 00:36:49.815 "timeout_sec": 30 00:36:49.815 } 00:36:49.815 }, 00:36:49.815 { 00:36:49.815 "method": "bdev_nvme_set_options", 00:36:49.815 "params": { 00:36:49.815 "action_on_timeout": "none", 00:36:49.815 "timeout_us": 0, 00:36:49.815 "timeout_admin_us": 0, 00:36:49.815 "keep_alive_timeout_ms": 10000, 00:36:49.815 "arbitration_burst": 0, 00:36:49.815 "low_priority_weight": 0, 00:36:49.815 "medium_priority_weight": 0, 00:36:49.815 "high_priority_weight": 0, 00:36:49.815 "nvme_adminq_poll_period_us": 10000, 00:36:49.815 "nvme_ioq_poll_period_us": 0, 00:36:49.815 "io_queue_requests": 512, 
00:36:49.815 "delay_cmd_submit": true, 00:36:49.815 "transport_retry_count": 4, 00:36:49.815 "bdev_retry_count": 3, 00:36:49.815 "transport_ack_timeout": 0, 00:36:49.815 "ctrlr_loss_timeout_sec": 0, 00:36:49.815 "reconnect_delay_sec": 0, 00:36:49.815 "fast_io_fail_timeout_sec": 0, 00:36:49.815 "disable_auto_failback": false, 00:36:49.815 "generate_uuids": false, 00:36:49.815 "transport_tos": 0, 00:36:49.815 "nvme_error_stat": false, 00:36:49.815 "rdma_srq_size": 0, 00:36:49.815 "io_path_stat": false, 00:36:49.815 "allow_accel_sequence": false, 00:36:49.815 "rdma_max_cq_size": 0, 00:36:49.815 "rdma_cm_event_timeout_ms": 0, 00:36:49.815 "dhchap_digests": [ 00:36:49.815 "sha256", 00:36:49.815 "sha384", 00:36:49.815 "sha512" 00:36:49.815 ], 00:36:49.815 "dhchap_dhgroups": [ 00:36:49.815 "null", 00:36:49.815 "ffdhe2048", 00:36:49.815 "ffdhe3072", 00:36:49.815 "ffdhe4096", 00:36:49.815 "ffdhe6144", 00:36:49.815 "ffdhe8192" 00:36:49.815 ] 00:36:49.815 } 00:36:49.815 }, 00:36:49.815 { 00:36:49.815 "method": "bdev_nvme_attach_controller", 00:36:49.815 "params": { 00:36:49.815 "name": "nvme0", 00:36:49.815 "trtype": "TCP", 00:36:49.815 "adrfam": "IPv4", 00:36:49.815 "traddr": "127.0.0.1", 00:36:49.815 "trsvcid": "4420", 00:36:49.815 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:49.815 "prchk_reftag": false, 00:36:49.815 "prchk_guard": false, 00:36:49.815 "ctrlr_loss_timeout_sec": 0, 00:36:49.815 "reconnect_delay_sec": 0, 00:36:49.815 "fast_io_fail_timeout_sec": 0, 00:36:49.815 "psk": "key0", 00:36:49.815 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:49.815 "hdgst": false, 00:36:49.815 "ddgst": false, 00:36:49.815 "multipath": "multipath" 00:36:49.815 } 00:36:49.815 }, 00:36:49.815 { 00:36:49.815 "method": "bdev_nvme_set_hotplug", 00:36:49.815 "params": { 00:36:49.815 "period_us": 100000, 00:36:49.815 "enable": false 00:36:49.815 } 00:36:49.815 }, 00:36:49.815 { 00:36:49.815 "method": "bdev_wait_for_examine" 00:36:49.815 } 00:36:49.815 ] 00:36:49.815 }, 00:36:49.815 { 00:36:49.815 "subsystem": "nbd", 00:36:49.815 "config": [] 00:36:49.815 } 00:36:49.815 ] 00:36:49.815 }' 00:36:49.815 06:47:21 keyring_file -- keyring/file.sh@115 -- # killprocess 2284586 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2284586 ']' 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2284586 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2284586 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2284586' 00:36:49.815 killing process with pid 2284586 00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@971 -- # kill 2284586 00:36:49.815 Received shutdown signal, test time was about 1.000000 seconds 00:36:49.815 00:36:49.815 Latency(us) 00:36:49.815 [2024-11-20T05:47:21.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:49.815 [2024-11-20T05:47:21.651Z] =================================================================================================================== 00:36:49.815 [2024-11-20T05:47:21.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 
00:36:49.815 06:47:21 keyring_file -- common/autotest_common.sh@976 -- # wait 2284586 00:36:50.074 06:47:21 keyring_file -- keyring/file.sh@118 -- # bperfpid=2286062 00:36:50.074 06:47:21 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2286062 /var/tmp/bperf.sock 00:36:50.074 06:47:21 keyring_file -- common/autotest_common.sh@833 -- # '[' -z 2286062 ']' 00:36:50.074 06:47:21 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:36:50.074 06:47:21 keyring_file -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:50.074 06:47:21 keyring_file -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:50.074 06:47:21 keyring_file -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:50.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:50.074 06:47:21 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:36:50.074 "subsystems": [ 00:36:50.074 { 00:36:50.074 "subsystem": "keyring", 00:36:50.074 "config": [ 00:36:50.074 { 00:36:50.074 "method": "keyring_file_add_key", 00:36:50.074 "params": { 00:36:50.074 "name": "key0", 00:36:50.074 "path": "/tmp/tmp.Xr7Nrw9QL2" 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "keyring_file_add_key", 00:36:50.074 "params": { 00:36:50.074 "name": "key1", 00:36:50.074 "path": "/tmp/tmp.IrseYky9eJ" 00:36:50.074 } 00:36:50.074 } 00:36:50.074 ] 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "subsystem": "iobuf", 00:36:50.074 "config": [ 00:36:50.074 { 00:36:50.074 "method": "iobuf_set_options", 00:36:50.074 "params": { 00:36:50.074 "small_pool_count": 8192, 00:36:50.074 "large_pool_count": 1024, 00:36:50.074 "small_bufsize": 8192, 00:36:50.074 "large_bufsize": 135168, 00:36:50.074 "enable_numa": false 00:36:50.074 } 00:36:50.074 } 00:36:50.074 ] 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "subsystem": "sock", 00:36:50.074 "config": [ 00:36:50.074 { 00:36:50.074 "method": "sock_set_default_impl", 00:36:50.074 "params": { 00:36:50.074 "impl_name": "posix" 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "sock_impl_set_options", 00:36:50.074 "params": { 00:36:50.074 "impl_name": "ssl", 00:36:50.074 "recv_buf_size": 4096, 00:36:50.074 "send_buf_size": 4096, 00:36:50.074 "enable_recv_pipe": true, 00:36:50.074 "enable_quickack": false, 00:36:50.074 "enable_placement_id": 0, 00:36:50.074 "enable_zerocopy_send_server": true, 00:36:50.074 "enable_zerocopy_send_client": false, 00:36:50.074 "zerocopy_threshold": 0, 00:36:50.074 "tls_version": 0, 00:36:50.074 "enable_ktls": false 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "sock_impl_set_options", 00:36:50.074 "params": { 00:36:50.074 "impl_name": "posix", 00:36:50.074 "recv_buf_size": 2097152, 00:36:50.074 "send_buf_size": 2097152, 00:36:50.074 "enable_recv_pipe": true, 00:36:50.074 "enable_quickack": false, 00:36:50.074 "enable_placement_id": 0, 00:36:50.074 "enable_zerocopy_send_server": true, 00:36:50.074 "enable_zerocopy_send_client": false, 00:36:50.074 "zerocopy_threshold": 0, 00:36:50.074 "tls_version": 0, 00:36:50.074 "enable_ktls": false 00:36:50.074 } 00:36:50.074 } 00:36:50.074 ] 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "subsystem": "vmd", 00:36:50.074 "config": [] 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "subsystem": "accel", 00:36:50.074 
"config": [ 00:36:50.074 { 00:36:50.074 "method": "accel_set_options", 00:36:50.074 "params": { 00:36:50.074 "small_cache_size": 128, 00:36:50.074 "large_cache_size": 16, 00:36:50.074 "task_count": 2048, 00:36:50.074 "sequence_count": 2048, 00:36:50.074 "buf_count": 2048 00:36:50.074 } 00:36:50.074 } 00:36:50.074 ] 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "subsystem": "bdev", 00:36:50.074 "config": [ 00:36:50.074 { 00:36:50.074 "method": "bdev_set_options", 00:36:50.074 "params": { 00:36:50.074 "bdev_io_pool_size": 65535, 00:36:50.074 "bdev_io_cache_size": 256, 00:36:50.074 "bdev_auto_examine": true, 00:36:50.074 "iobuf_small_cache_size": 128, 00:36:50.074 "iobuf_large_cache_size": 16 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "bdev_raid_set_options", 00:36:50.074 "params": { 00:36:50.074 "process_window_size_kb": 1024, 00:36:50.074 "process_max_bandwidth_mb_sec": 0 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "bdev_iscsi_set_options", 00:36:50.074 "params": { 00:36:50.074 "timeout_sec": 30 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "bdev_nvme_set_options", 00:36:50.074 "params": { 00:36:50.074 "action_on_timeout": "none", 00:36:50.074 "timeout_us": 0, 00:36:50.074 "timeout_admin_us": 0, 00:36:50.074 "keep_alive_timeout_ms": 10000, 00:36:50.074 "arbitration_burst": 0, 00:36:50.074 "low_priority_weight": 0, 00:36:50.074 "medium_priority_weight": 0, 00:36:50.074 "high_priority_weight": 0, 00:36:50.074 "nvme_adminq_poll_period_us": 10000, 00:36:50.074 "nvme_ioq_poll_period_us": 0, 00:36:50.074 "io_queue_requests": 512, 00:36:50.074 "delay_cmd_submit": true, 00:36:50.074 "transport_retry_count": 4, 00:36:50.074 "bdev_retry_count": 3, 00:36:50.074 "transport_ack_timeout": 0, 00:36:50.074 "ctrlr_loss_timeout_sec": 0, 00:36:50.074 "reconnect_delay_sec": 0, 00:36:50.074 "fast_io_fail_timeout_sec": 0, 00:36:50.074 "disable_auto_failback": false, 00:36:50.074 "generate_uuids": false, 00:36:50.074 "transport_tos": 0, 00:36:50.074 "nvme_error_stat": false, 00:36:50.074 "rdma_srq_size": 0, 00:36:50.074 "io_path_stat": false, 00:36:50.074 "allow_accel_sequence": false, 00:36:50.074 "rdma_max_cq_size": 0, 00:36:50.074 "rdma_cm_event_timeout_ms": 0, 00:36:50.074 "dhchap_digests": [ 00:36:50.074 "sha256", 00:36:50.074 "sha384", 00:36:50.074 "sha512" 00:36:50.074 ], 00:36:50.074 "dhchap_dhgroups": [ 00:36:50.074 "null", 00:36:50.074 "ffdhe2048", 00:36:50.074 "ffdhe3072", 00:36:50.074 "ffdhe4096", 00:36:50.074 "ffdhe6144", 00:36:50.074 "ffdhe8192" 00:36:50.074 ] 00:36:50.074 } 00:36:50.074 }, 00:36:50.074 { 00:36:50.074 "method": "bdev_nvme_attach_controller", 00:36:50.074 "params": { 00:36:50.074 "name": "nvme0", 00:36:50.074 "trtype": "TCP", 00:36:50.074 "adrfam": "IPv4", 00:36:50.074 "traddr": "127.0.0.1", 00:36:50.074 "trsvcid": "4420", 00:36:50.074 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:50.074 "prchk_reftag": false, 00:36:50.074 "prchk_guard": false, 00:36:50.074 "ctrlr_loss_timeout_sec": 0, 00:36:50.075 "reconnect_delay_sec": 0, 00:36:50.075 "fast_io_fail_timeout_sec": 0, 00:36:50.075 "psk": "key0", 00:36:50.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:50.075 "hdgst": false, 00:36:50.075 "ddgst": false, 00:36:50.075 "multipath": "multipath" 00:36:50.075 } 00:36:50.075 }, 00:36:50.075 { 00:36:50.075 "method": "bdev_nvme_set_hotplug", 00:36:50.075 "params": { 00:36:50.075 "period_us": 100000, 00:36:50.075 "enable": false 00:36:50.075 } 00:36:50.075 }, 00:36:50.075 { 00:36:50.075 "method": "bdev_wait_for_examine" 
00:36:50.075 } 00:36:50.075 ] 00:36:50.075 }, 00:36:50.075 { 00:36:50.075 "subsystem": "nbd", 00:36:50.075 "config": [] 00:36:50.075 } 00:36:50.075 ] 00:36:50.075 }' 00:36:50.075 06:47:21 keyring_file -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:50.075 06:47:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:50.075 [2024-11-20 06:47:21.734828] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 00:36:50.075 [2024-11-20 06:47:21.734911] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286062 ] 00:36:50.075 [2024-11-20 06:47:21.805354] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:50.075 [2024-11-20 06:47:21.864771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:50.332 [2024-11-20 06:47:22.046805] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:50.332 06:47:22 keyring_file -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:50.332 06:47:22 keyring_file -- common/autotest_common.sh@866 -- # return 0 00:36:50.332 06:47:22 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:36:50.332 06:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.332 06:47:22 keyring_file -- keyring/file.sh@121 -- # jq length 00:36:50.897 06:47:22 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:36:50.897 06:47:22 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:50.897 06:47:22 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:36:50.897 06:47:22 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:50.897 06:47:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:51.155 06:47:22 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:36:51.155 06:47:22 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:36:51.155 06:47:22 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:36:51.155 06:47:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:36:51.412 06:47:23 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:36:51.412 06:47:23 keyring_file -- keyring/file.sh@1 -- # cleanup 00:36:51.412 06:47:23 
keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.Xr7Nrw9QL2 /tmp/tmp.IrseYky9eJ 00:36:51.412 06:47:23 keyring_file -- keyring/file.sh@20 -- # killprocess 2286062 00:36:51.412 06:47:23 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2286062 ']' 00:36:51.412 06:47:23 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2286062 00:36:51.412 06:47:23 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:51.412 06:47:23 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:51.412 06:47:23 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2286062 00:36:51.670 06:47:23 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:51.670 06:47:23 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:51.670 06:47:23 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2286062' 00:36:51.670 killing process with pid 2286062 00:36:51.670 06:47:23 keyring_file -- common/autotest_common.sh@971 -- # kill 2286062 00:36:51.670 Received shutdown signal, test time was about 1.000000 seconds 00:36:51.670 00:36:51.670 Latency(us) 00:36:51.670 [2024-11-20T05:47:23.506Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:51.670 [2024-11-20T05:47:23.506Z] =================================================================================================================== 00:36:51.670 [2024-11-20T05:47:23.506Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:36:51.670 06:47:23 keyring_file -- common/autotest_common.sh@976 -- # wait 2286062 00:36:51.927 06:47:23 keyring_file -- keyring/file.sh@21 -- # killprocess 2284571 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@952 -- # '[' -z 2284571 ']' 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@956 -- # kill -0 2284571 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@957 -- # uname 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2284571 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2284571' 00:36:51.927 killing process with pid 2284571 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@971 -- # kill 2284571 00:36:51.927 06:47:23 keyring_file -- common/autotest_common.sh@976 -- # wait 2284571 00:36:52.184 00:36:52.184 real 0m14.599s 00:36:52.184 user 0m37.142s 00:36:52.184 sys 0m3.202s 00:36:52.184 06:47:23 keyring_file -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:52.184 06:47:23 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:52.184 ************************************ 00:36:52.184 END TEST keyring_file 00:36:52.184 ************************************ 00:36:52.184 06:47:24 -- spdk/autotest.sh@289 -- # [[ y == y ]] 00:36:52.184 06:47:24 -- spdk/autotest.sh@290 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:52.184 06:47:24 -- common/autotest_common.sh@1103 -- # '[' 3 -le 1 ']' 00:36:52.184 06:47:24 -- common/autotest_common.sh@1109 -- # xtrace_disable 
00:36:52.184 06:47:24 -- common/autotest_common.sh@10 -- # set +x 00:36:52.443 ************************************ 00:36:52.443 START TEST keyring_linux 00:36:52.443 ************************************ 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:36:52.443 Joined session keyring: 949640203 00:36:52.443 * Looking for test storage... 00:36:52.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1690 -- # [[ y == y ]] 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1691 -- # lcov --version 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1691 -- # awk '{print $NF}' 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1691 -- # lt 1.15 2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@345 -- # : 1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@368 -- # return 0 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1692 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1704 -- # export 'LCOV_OPTS= 00:36:52.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.443 --rc genhtml_branch_coverage=1 00:36:52.443 --rc genhtml_function_coverage=1 00:36:52.443 --rc genhtml_legend=1 00:36:52.443 --rc geninfo_all_blocks=1 00:36:52.443 --rc geninfo_unexecuted_blocks=1 00:36:52.443 00:36:52.443 ' 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1704 -- # LCOV_OPTS=' 00:36:52.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.443 --rc genhtml_branch_coverage=1 00:36:52.443 --rc genhtml_function_coverage=1 00:36:52.443 --rc genhtml_legend=1 00:36:52.443 --rc geninfo_all_blocks=1 00:36:52.443 --rc geninfo_unexecuted_blocks=1 00:36:52.443 00:36:52.443 ' 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1705 -- # export 'LCOV=lcov 00:36:52.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.443 --rc genhtml_branch_coverage=1 00:36:52.443 --rc genhtml_function_coverage=1 00:36:52.443 --rc genhtml_legend=1 00:36:52.443 --rc geninfo_all_blocks=1 00:36:52.443 --rc geninfo_unexecuted_blocks=1 00:36:52.443 00:36:52.443 ' 00:36:52.443 06:47:24 keyring_linux -- common/autotest_common.sh@1705 -- # LCOV='lcov 00:36:52.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:52.443 --rc genhtml_branch_coverage=1 00:36:52.443 --rc genhtml_function_coverage=1 00:36:52.443 --rc genhtml_legend=1 00:36:52.443 --rc geninfo_all_blocks=1 00:36:52.443 --rc geninfo_unexecuted_blocks=1 00:36:52.443 00:36:52.443 ' 00:36:52.443 06:47:24 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:52.443 06:47:24 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:29f67375-a902-e411-ace9-001e67bc3c9a 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=29f67375-a902-e411-ace9-001e67bc3c9a 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:52.443 06:47:24 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:52.443 06:47:24 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.443 06:47:24 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.443 06:47:24 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:52.443 06:47:24 keyring_linux -- paths/export.sh@5 -- # export PATH 00:36:52.443 06:47:24 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
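The lcov version gate stepped through near the start of this test relies on the cmp_versions helper from scripts/common.sh: both version strings are split on '.', '-' and ':' and compared component by component. A simplified standalone sketch of that comparison (an illustration under those assumptions, not the project's exact helper) is:

# Minimal sketch of a component-wise version comparison, mirroring the IFS=.-: splitting
# and per-field numeric compare visible in the scripts/common.sh trace above.
version_lt() {
    local -a a b
    local i x y
    IFS=.-: read -ra a <<< "$1"
    IFS=.-: read -ra b <<< "$2"
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0}; y=${b[i]:-0}
        ((x > y)) && return 1   # left side is newer: not less-than
        ((x < y)) && return 0   # left side is older: less-than
    done
    return 1                    # equal: not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"   # matches the 'lt 1.15 2' check in the trace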
00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:52.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:52.443 06:47:24 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:52.443 06:47:24 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:52.443 06:47:24 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:52.443 06:47:24 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:52.443 06:47:24 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:36:52.443 06:47:24 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:36:52.444 06:47:24 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:36:52.444 06:47:24 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:36:52.444 /tmp/:spdk-test:key0 00:36:52.444 06:47:24 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:36:52.444 
06:47:24 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:36:52.444 06:47:24 keyring_linux -- nvmf/common.sh@733 -- # python - 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:36:52.444 06:47:24 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:36:52.444 /tmp/:spdk-test:key1 00:36:52.444 06:47:24 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2286534 00:36:52.444 06:47:24 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:52.444 06:47:24 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 2286534 00:36:52.444 06:47:24 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2286534 ']' 00:36:52.444 06:47:24 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:52.444 06:47:24 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:52.444 06:47:24 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:52.444 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:52.444 06:47:24 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:52.444 06:47:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:52.701 [2024-11-20 06:47:24.314763] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:36:52.701 [2024-11-20 06:47:24.314849] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286534 ] 00:36:52.701 [2024-11-20 06:47:24.382067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:52.701 [2024-11-20 06:47:24.443908] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:36:52.959 06:47:24 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:52.959 [2024-11-20 06:47:24.731544] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:52.959 null0 00:36:52.959 [2024-11-20 06:47:24.763615] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:52.959 [2024-11-20 06:47:24.764135] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:36:52.959 06:47:24 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:36:52.959 1039882083 00:36:52.959 06:47:24 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:36:52.959 702013932 00:36:52.959 06:47:24 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2286546 00:36:52.959 06:47:24 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:36:52.959 06:47:24 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2286546 /var/tmp/bperf.sock 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@833 -- # '[' -z 2286546 ']' 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@837 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@838 -- # local max_retries=100 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@840 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:52.959 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@842 -- # xtrace_disable 00:36:52.959 06:47:24 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:53.217 [2024-11-20 06:47:24.830725] Starting SPDK v25.01-pre git sha1 ecdb65a23 / DPDK 24.03.0 initialization... 
00:36:53.217 [2024-11-20 06:47:24.830802] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2286546 ] 00:36:53.217 [2024-11-20 06:47:24.894118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:53.217 [2024-11-20 06:47:24.951672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:53.474 06:47:25 keyring_linux -- common/autotest_common.sh@862 -- # (( i == 0 )) 00:36:53.474 06:47:25 keyring_linux -- common/autotest_common.sh@866 -- # return 0 00:36:53.474 06:47:25 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:36:53.474 06:47:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:36:53.732 06:47:25 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:36:53.732 06:47:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:36:53.990 06:47:25 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:53.990 06:47:25 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:36:54.248 [2024-11-20 06:47:25.957077] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:36:54.248 nvme0n1 00:36:54.248 06:47:26 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:36:54.248 06:47:26 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:36:54.248 06:47:26 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:54.248 06:47:26 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:54.248 06:47:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.248 06:47:26 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:54.505 06:47:26 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:36:54.505 06:47:26 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:54.505 06:47:26 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:36:54.505 06:47:26 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:36:54.505 06:47:26 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:54.505 06:47:26 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:54.505 06:47:26 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:36:55.070 06:47:26 keyring_linux -- keyring/linux.sh@25 -- # sn=1039882083 00:36:55.070 06:47:26 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:36:55.070 06:47:26 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:55.070 06:47:26 keyring_linux -- 
keyring/linux.sh@26 -- # [[ 1039882083 == \1\0\3\9\8\8\2\0\8\3 ]] 00:36:55.070 06:47:26 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1039882083 00:36:55.070 06:47:26 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:36:55.070 06:47:26 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:55.070 Running I/O for 1 seconds... 00:36:56.001 11456.00 IOPS, 44.75 MiB/s 00:36:56.001 Latency(us) 00:36:56.001 [2024-11-20T05:47:27.837Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:56.001 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:56.001 nvme0n1 : 1.01 11464.24 44.78 0.00 0.00 11099.47 8592.50 19806.44 00:36:56.001 [2024-11-20T05:47:27.837Z] =================================================================================================================== 00:36:56.001 [2024-11-20T05:47:27.837Z] Total : 11464.24 44.78 0.00 0.00 11099.47 8592.50 19806.44 00:36:56.001 { 00:36:56.001 "results": [ 00:36:56.001 { 00:36:56.001 "job": "nvme0n1", 00:36:56.001 "core_mask": "0x2", 00:36:56.001 "workload": "randread", 00:36:56.001 "status": "finished", 00:36:56.001 "queue_depth": 128, 00:36:56.001 "io_size": 4096, 00:36:56.001 "runtime": 1.010534, 00:36:56.001 "iops": 11464.235740707389, 00:36:56.001 "mibps": 44.78217086213824, 00:36:56.001 "io_failed": 0, 00:36:56.001 "io_timeout": 0, 00:36:56.001 "avg_latency_us": 11099.465842612573, 00:36:56.001 "min_latency_us": 8592.497777777779, 00:36:56.001 "max_latency_us": 19806.435555555556 00:36:56.001 } 00:36:56.001 ], 00:36:56.001 "core_count": 1 00:36:56.001 } 00:36:56.001 06:47:27 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:56.001 06:47:27 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:56.259 06:47:28 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:56.259 06:47:28 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:56.259 06:47:28 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:56.259 06:47:28 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:56.259 06:47:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:56.259 06:47:28 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:56.517 06:47:28 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:56.517 06:47:28 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:56.517 06:47:28 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:56.517 06:47:28 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 
--psk :spdk-test:key1 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:36:56.517 06:47:28 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:56.517 06:47:28 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:56.775 [2024-11-20 06:47:28.567860] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:56.775 [2024-11-20 06:47:28.568004] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a52c0 (107): Transport endpoint is not connected 00:36:56.775 [2024-11-20 06:47:28.568992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18a52c0 (9): Bad file descriptor 00:36:56.775 [2024-11-20 06:47:28.569991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:56.775 [2024-11-20 06:47:28.570014] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:56.775 [2024-11-20 06:47:28.570033] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:56.775 [2024-11-20 06:47:28.570064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:56.775 request: 00:36:56.775 { 00:36:56.775 "name": "nvme0", 00:36:56.775 "trtype": "tcp", 00:36:56.775 "traddr": "127.0.0.1", 00:36:56.775 "adrfam": "ipv4", 00:36:56.775 "trsvcid": "4420", 00:36:56.775 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:56.775 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:56.775 "prchk_reftag": false, 00:36:56.775 "prchk_guard": false, 00:36:56.775 "hdgst": false, 00:36:56.775 "ddgst": false, 00:36:56.775 "psk": ":spdk-test:key1", 00:36:56.775 "allow_unrecognized_csi": false, 00:36:56.775 "method": "bdev_nvme_attach_controller", 00:36:56.775 "req_id": 1 00:36:56.775 } 00:36:56.775 Got JSON-RPC error response 00:36:56.775 response: 00:36:56.775 { 00:36:56.775 "code": -5, 00:36:56.775 "message": "Input/output error" 00:36:56.775 } 00:36:56.775 06:47:28 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:36:56.775 06:47:28 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:36:56.775 06:47:28 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:36:56.775 06:47:28 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@33 -- # sn=1039882083 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1039882083 00:36:56.775 1 links removed 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:56.775 06:47:28 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:36:56.776 06:47:28 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:36:56.776 06:47:28 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:36:56.776 06:47:28 keyring_linux -- keyring/linux.sh@33 -- # sn=702013932 00:36:56.776 06:47:28 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 702013932 00:36:56.776 1 links removed 00:36:56.776 06:47:28 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2286546 00:36:56.776 06:47:28 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2286546 ']' 00:36:56.776 06:47:28 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2286546 00:36:56.776 06:47:28 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:36:56.776 06:47:28 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:56.776 06:47:28 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2286546 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_1 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_1 = sudo ']' 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2286546' 00:36:57.033 killing process with pid 2286546 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@971 -- # kill 2286546 00:36:57.033 Received shutdown signal, test time was about 1.000000 seconds 00:36:57.033 00:36:57.033 
Latency(us) 00:36:57.033 [2024-11-20T05:47:28.869Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:57.033 [2024-11-20T05:47:28.869Z] =================================================================================================================== 00:36:57.033 [2024-11-20T05:47:28.869Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@976 -- # wait 2286546 00:36:57.033 06:47:28 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2286534 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@952 -- # '[' -z 2286534 ']' 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@956 -- # kill -0 2286534 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@957 -- # uname 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@957 -- # '[' Linux = Linux ']' 00:36:57.033 06:47:28 keyring_linux -- common/autotest_common.sh@958 -- # ps --no-headers -o comm= 2286534 00:36:57.290 06:47:28 keyring_linux -- common/autotest_common.sh@958 -- # process_name=reactor_0 00:36:57.290 06:47:28 keyring_linux -- common/autotest_common.sh@962 -- # '[' reactor_0 = sudo ']' 00:36:57.290 06:47:28 keyring_linux -- common/autotest_common.sh@970 -- # echo 'killing process with pid 2286534' 00:36:57.290 killing process with pid 2286534 00:36:57.290 06:47:28 keyring_linux -- common/autotest_common.sh@971 -- # kill 2286534 00:36:57.290 06:47:28 keyring_linux -- common/autotest_common.sh@976 -- # wait 2286534 00:36:57.550 00:36:57.550 real 0m5.277s 00:36:57.550 user 0m10.485s 00:36:57.550 sys 0m1.603s 00:36:57.550 06:47:29 keyring_linux -- common/autotest_common.sh@1128 -- # xtrace_disable 00:36:57.550 06:47:29 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:36:57.550 ************************************ 00:36:57.550 END TEST keyring_linux 00:36:57.550 ************************************ 00:36:57.550 06:47:29 -- spdk/autotest.sh@307 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@329 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:36:57.550 06:47:29 -- spdk/autotest.sh@362 -- # [[ 0 -eq 1 ]] 00:36:57.550 06:47:29 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:36:57.550 06:47:29 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:36:57.550 06:47:29 -- spdk/autotest.sh@374 -- # [[ '' -eq 1 ]] 00:36:57.550 06:47:29 -- spdk/autotest.sh@381 -- # trap - SIGINT SIGTERM EXIT 00:36:57.550 06:47:29 -- spdk/autotest.sh@383 -- # timing_enter post_cleanup 00:36:57.550 06:47:29 -- common/autotest_common.sh@724 -- # xtrace_disable 00:36:57.550 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:36:57.550 06:47:29 -- spdk/autotest.sh@384 -- # autotest_cleanup 00:36:57.550 06:47:29 -- common/autotest_common.sh@1394 -- # local autotest_es=0 00:36:57.550 06:47:29 -- common/autotest_common.sh@1395 -- # xtrace_disable 00:36:57.550 06:47:29 -- common/autotest_common.sh@10 -- # set +x 00:37:00.080 INFO: APP EXITING 
00:37:00.080 INFO: killing all VMs 00:37:00.080 INFO: killing vhost app 00:37:00.080 INFO: EXIT DONE 00:37:01.013 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:01.013 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:01.013 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:01.013 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:01.013 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:01.013 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:01.013 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:01.013 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:01.013 0000:0b:00.0 (8086 0a54): Already using the nvme driver 00:37:01.013 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:01.013 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:01.013 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:01.013 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:01.013 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:01.013 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:01.013 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:01.013 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:02.387 Cleaning 00:37:02.387 Removing: /var/run/dpdk/spdk0/config 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:02.387 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:02.387 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:02.387 Removing: /var/run/dpdk/spdk1/config 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:02.387 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:02.387 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:02.387 Removing: /var/run/dpdk/spdk2/config 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:02.387 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:02.387 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:02.387 Removing: /var/run/dpdk/spdk3/config 00:37:02.387 Removing: 
/var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:02.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:02.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:02.387 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:02.645 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:02.645 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:02.645 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:02.645 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:02.645 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:02.645 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:02.645 Removing: /var/run/dpdk/spdk4/config 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:02.645 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:02.645 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:02.645 Removing: /dev/shm/bdev_svc_trace.1 00:37:02.645 Removing: /dev/shm/nvmf_trace.0 00:37:02.645 Removing: /dev/shm/spdk_tgt_trace.pid1965392 00:37:02.645 Removing: /var/run/dpdk/spdk0 00:37:02.645 Removing: /var/run/dpdk/spdk1 00:37:02.645 Removing: /var/run/dpdk/spdk2 00:37:02.645 Removing: /var/run/dpdk/spdk3 00:37:02.646 Removing: /var/run/dpdk/spdk4 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1963709 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1964460 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1965392 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1965732 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1966429 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1966569 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1967280 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1967412 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1967672 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1968875 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1969924 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1970197 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1970439 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1970654 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1970852 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1971009 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1971165 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1971476 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1971667 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1974174 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1974339 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1974509 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1974627 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1974943 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975061 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975376 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975482 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975669 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975694 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975969 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1975988 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1976489 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1976642 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1976849 00:37:02.646 Removing: 
/var/run/dpdk/spdk_pid1978976 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1981607 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1988742 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1989150 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1991672 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1991899 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1994952 00:37:02.646 Removing: /var/run/dpdk/spdk_pid1998870 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2001011 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2007433 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2012668 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2013988 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2014660 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2024927 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2027336 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2054988 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2058249 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2062122 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2066483 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2066521 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2067074 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2067795 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2068493 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2069404 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2069407 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2069666 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2069679 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2069803 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2070344 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2070997 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2071652 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2072053 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2072064 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2072215 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2073219 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2073944 00:37:02.646 Removing: /var/run/dpdk/spdk_pid2079171 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2107312 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2110239 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2111417 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2112738 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2112884 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2112987 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2113095 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2113602 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2114924 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2115668 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2116103 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2118190 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2118644 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2119203 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2121730 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2125011 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2125012 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2125013 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2127233 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2132072 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2134730 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2138640 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2139472 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2140566 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2141652 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2144419 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2146997 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2149365 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2153597 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2153609 00:37:02.905 Removing: 
/var/run/dpdk/spdk_pid2156513 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2156702 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2156895 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2157248 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2157287 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2160442 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2160897 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2163560 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2165425 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2168970 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2172305 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2178801 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2183276 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2183286 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2196424 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2196949 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2197358 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2197768 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2198352 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2198885 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2199302 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2199716 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2202230 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2202476 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2206292 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2206345 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2209715 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2212335 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2219368 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2219778 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2222271 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2222438 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2225059 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2229367 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2231441 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2237787 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2242993 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2244177 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2244840 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2255133 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2257280 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2259297 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2264966 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2264971 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2267910 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2269387 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2270793 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2271540 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2272938 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2273815 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2279126 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2279492 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2279884 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2281444 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2281771 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2282122 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2284571 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2284586 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2286062 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2286534 00:37:02.905 Removing: /var/run/dpdk/spdk_pid2286546 00:37:02.905 Clean 00:37:03.164 06:47:34 -- common/autotest_common.sh@1451 -- # return 0 00:37:03.164 06:47:34 -- spdk/autotest.sh@385 -- # timing_exit post_cleanup 00:37:03.164 06:47:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:03.164 06:47:34 -- common/autotest_common.sh@10 -- # set +x 00:37:03.164 06:47:34 -- 
spdk/autotest.sh@387 -- # timing_exit autotest 00:37:03.164 06:47:34 -- common/autotest_common.sh@730 -- # xtrace_disable 00:37:03.164 06:47:34 -- common/autotest_common.sh@10 -- # set +x 00:37:03.164 06:47:34 -- spdk/autotest.sh@388 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:03.164 06:47:34 -- spdk/autotest.sh@390 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:03.164 06:47:34 -- spdk/autotest.sh@390 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:03.164 06:47:34 -- spdk/autotest.sh@392 -- # [[ y == y ]] 00:37:03.164 06:47:34 -- spdk/autotest.sh@394 -- # hostname 00:37:03.164 06:47:34 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:03.422 geninfo: WARNING: invalid characters removed from testname! 00:37:35.494 06:48:05 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:38.017 06:48:09 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:41.294 06:48:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:44.572 06:48:15 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:47.096 06:48:18 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:50.372 06:48:21 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:53.717 06:48:24 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:37:53.717 06:48:24 -- spdk/autorun.sh@1 -- $ timing_finish 00:37:53.717 06:48:24 -- common/autotest_common.sh@736 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:37:53.717 06:48:24 -- common/autotest_common.sh@738 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:37:53.717 06:48:24 -- common/autotest_common.sh@739 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:37:53.717 06:48:24 -- common/autotest_common.sh@742 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:53.717 + [[ -n 1893136 ]] 00:37:53.717 + sudo kill 1893136 00:37:53.729 [Pipeline] } 00:37:53.747 [Pipeline] // stage 00:37:53.753 [Pipeline] } 00:37:53.770 [Pipeline] // timeout 00:37:53.774 [Pipeline] } 00:37:53.788 [Pipeline] // catchError 00:37:53.793 [Pipeline] } 00:37:53.807 [Pipeline] // wrap 00:37:53.813 [Pipeline] } 00:37:53.825 [Pipeline] // catchError 00:37:53.834 [Pipeline] stage 00:37:53.836 [Pipeline] { (Epilogue) 00:37:53.849 [Pipeline] catchError 00:37:53.851 [Pipeline] { 00:37:53.866 [Pipeline] echo 00:37:53.868 Cleanup processes 00:37:53.873 [Pipeline] sh 00:37:54.159 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:54.159 2297848 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:54.173 [Pipeline] sh 00:37:54.456 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:37:54.456 ++ grep -v 'sudo pgrep' 00:37:54.456 ++ awk '{print $1}' 00:37:54.456 + sudo kill -9 00:37:54.456 + true 00:37:54.468 [Pipeline] sh 00:37:54.751 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:04.749 [Pipeline] sh 00:38:05.034 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:05.034 Artifacts sizes are good 00:38:05.048 [Pipeline] archiveArtifacts 00:38:05.055 Archiving artifacts 00:38:05.168 [Pipeline] sh 00:38:05.450 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:05.464 [Pipeline] cleanWs 00:38:05.473 [WS-CLEANUP] Deleting project workspace... 00:38:05.473 [WS-CLEANUP] Deferred wipeout is used... 00:38:05.480 [WS-CLEANUP] done 00:38:05.482 [Pipeline] } 00:38:05.497 [Pipeline] // catchError 00:38:05.507 [Pipeline] sh 00:38:05.794 + logger -p user.info -t JENKINS-CI 00:38:05.802 [Pipeline] } 00:38:05.814 [Pipeline] // stage 00:38:05.818 [Pipeline] } 00:38:05.830 [Pipeline] // node 00:38:05.835 [Pipeline] End of Pipeline 00:38:05.868 Finished: SUCCESS
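Note on the post-processing recorded above: after the target is torn down and the stale /var/run/dpdk/spdk*/ runtime files and /dev/shm trace files are removed, the job collapses the coverage data and renders the build-timing flame graph. The sequence below is a condensed sketch reconstructed from the lcov and flamegraph.pl invocations in this log, not the verbatim autotest.sh source; the long --rc option lists and the absolute /var/jenkins/workspace/... paths are abbreviated, and the SVG destination is an assumption (the log does not show where the flamegraph.pl stdout is redirected).

  # Sketch only -- reconstructed from the log above, not the verbatim autotest.sh source.
  # LCOV_OPTS abbreviates the full "--rc ..." flag set that appears in every lcov call in the log.
  LCOV_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 -q"
  lcov $LCOV_OPTS -c --no-external -d ./spdk -t "$(hostname)" -o cov_test.info      # capture post-test counters
  lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info               # merge baseline and test captures
  lcov $LCOV_OPTS -r cov_total.info '*/dpdk/*' -o cov_total.info                    # strip bundled DPDK sources
  lcov $LCOV_OPTS -r cov_total.info --ignore-errors unused,unused '/usr/*' -o cov_total.info   # strip system headers
  lcov $LCOV_OPTS -r cov_total.info '*/examples/vmd/*' -o cov_total.info            # strip tools not under test
  lcov $LCOV_OPTS -r cov_total.info '*/app/spdk_lspci/*' -o cov_total.info
  lcov $LCOV_OPTS -r cov_total.info '*/app/spdk_top/*' -o cov_total.info
  rm -f cov_base.info cov_test.info                                                 # only cov_total.info is kept
  /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: \
      --countname seconds timing.txt > timing.svg                                   # output file name is an assumption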